00:00:00.001 Started by upstream project "autotest-per-patch" build number 132354
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.041 The recommended git tool is: git
00:00:00.041 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.069 Fetching changes from the remote Git repository
00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.125 Using shallow fetch with depth 1
00:00:00.125 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.125 > git --version # timeout=10
00:00:00.175 > git --version # 'git version 2.39.2'
00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.058 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.075 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.088 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.088 > git config core.sparsecheckout # timeout=10
00:00:03.100 > git read-tree -mu HEAD # timeout=10
00:00:03.116 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.136 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.136 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.215 [Pipeline] Start of Pipeline
00:00:03.228 [Pipeline] library
00:00:03.230 Loading library shm_lib@master
00:00:03.230 Library shm_lib@master is cached. Copying from home.
00:00:03.245 [Pipeline] node
00:00:03.252 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.254 [Pipeline] {
00:00:03.263 [Pipeline] catchError
00:00:03.265 [Pipeline] {
00:00:03.277 [Pipeline] wrap
00:00:03.294 [Pipeline] {
00:00:03.302 [Pipeline] stage
00:00:03.303 [Pipeline] { (Prologue)
00:00:03.506 [Pipeline] sh
00:00:03.798 + logger -p user.info -t JENKINS-CI
00:00:03.814 [Pipeline] echo
00:00:03.816 Node: CYP12
00:00:03.821 [Pipeline] sh
00:00:04.127 [Pipeline] setCustomBuildProperty
00:00:04.136 [Pipeline] echo
00:00:04.137 Cleanup processes
00:00:04.142 [Pipeline] sh
00:00:04.432 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.432 1659207 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.447 [Pipeline] sh
00:00:04.740 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.740 ++ grep -v 'sudo pgrep'
00:00:04.740 ++ awk '{print $1}'
00:00:04.740 + sudo kill -9
00:00:04.740 + true
00:00:04.753 [Pipeline] cleanWs
00:00:04.762 [WS-CLEANUP] Deleting project workspace...
00:00:04.762 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.769 [WS-CLEANUP] done
00:00:04.772 [Pipeline] setCustomBuildProperty
00:00:04.781 [Pipeline] sh
00:00:05.064 + sudo git config --global --replace-all safe.directory '*'
00:00:05.165 [Pipeline] httpRequest
00:00:05.577 [Pipeline] echo
00:00:05.579 Sorcerer 10.211.164.20 is alive
00:00:05.585 [Pipeline] retry
00:00:05.586 [Pipeline] {
00:00:05.595 [Pipeline] httpRequest
00:00:05.599 HttpMethod: GET
00:00:05.600 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.600 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.634 Response Code: HTTP/1.1 200 OK
00:00:05.634 Success: Status code 200 is in the accepted range: 200,404
00:00:05.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.730 [Pipeline] }
00:00:30.743 [Pipeline] // retry
00:00:30.748 [Pipeline] sh
00:00:31.030 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:31.046 [Pipeline] httpRequest
00:00:31.402 [Pipeline] echo
00:00:31.404 Sorcerer 10.211.164.20 is alive
00:00:31.413 [Pipeline] retry
00:00:31.415 [Pipeline] {
00:00:31.428 [Pipeline] httpRequest
00:00:31.433 HttpMethod: GET
00:00:31.433 URL: http://10.211.164.20/packages/spdk_c788bae60c94ec0e73fefeba1822410ebb68d1a5.tar.gz
00:00:31.433 Sending request to url: http://10.211.164.20/packages/spdk_c788bae60c94ec0e73fefeba1822410ebb68d1a5.tar.gz
00:00:31.438 Response Code: HTTP/1.1 200 OK
00:00:31.439 Success: Status code 200 is in the accepted range: 200,404
00:00:31.439 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c788bae60c94ec0e73fefeba1822410ebb68d1a5.tar.gz
00:06:05.424 [Pipeline] }
00:06:05.440 [Pipeline] // retry
00:06:05.448 [Pipeline] sh
00:06:05.733 + tar --no-same-owner -xf spdk_c788bae60c94ec0e73fefeba1822410ebb68d1a5.tar.gz
00:06:08.289 [Pipeline] sh
00:06:08.575 + git -C spdk log --oneline -n5
00:06:08.575 c788bae60 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:06:08.575 e4689ab38 test/nvmf: Remove all transport conditions from the test suites
00:06:08.575 097b7c969 test/nvmf: Drop $RDMA_IP_LIST
00:06:08.575 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:06:08.575 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:06:08.586 [Pipeline] }
00:06:08.599 [Pipeline] // stage
00:06:08.606 [Pipeline] stage
00:06:08.607 [Pipeline] { (Prepare)
00:06:08.621 [Pipeline] writeFile
00:06:08.634 [Pipeline] sh
00:06:08.919 + logger -p user.info -t JENKINS-CI
00:06:08.931 [Pipeline] sh
00:06:09.217 + logger -p user.info -t JENKINS-CI
00:06:09.230 [Pipeline] sh
00:06:09.514 + cat autorun-spdk.conf
00:06:09.514 SPDK_RUN_FUNCTIONAL_TEST=1
00:06:09.514 SPDK_TEST_NVMF=1
00:06:09.514 SPDK_TEST_NVME_CLI=1
00:06:09.514 SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:09.514 SPDK_TEST_NVMF_NICS=e810
00:06:09.514 SPDK_TEST_VFIOUSER=1
00:06:09.514 SPDK_RUN_UBSAN=1
00:06:09.514 NET_TYPE=phy
00:06:09.522 RUN_NIGHTLY=0
00:06:09.526 [Pipeline] readFile
00:06:09.547 [Pipeline] withEnv
00:06:09.549 [Pipeline] {
00:06:09.560 [Pipeline] sh
00:06:09.845 + set -ex
00:06:09.845 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:06:09.845 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:09.845 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:09.845 ++ SPDK_TEST_NVMF=1
00:06:09.845 ++ SPDK_TEST_NVME_CLI=1
00:06:09.845 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:09.845 ++ SPDK_TEST_NVMF_NICS=e810
00:06:09.845 ++ SPDK_TEST_VFIOUSER=1
00:06:09.845 ++ SPDK_RUN_UBSAN=1
00:06:09.845 ++ NET_TYPE=phy
00:06:09.845 ++ RUN_NIGHTLY=0
00:06:09.845 + case $SPDK_TEST_NVMF_NICS in
00:06:09.845 + DRIVERS=ice
00:06:09.845 + [[ tcp == \r\d\m\a ]]
00:06:09.845 + [[ -n ice ]]
00:06:09.845 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:06:09.845 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:06:09.845 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:06:09.845 rmmod: ERROR: Module irdma is not currently loaded
00:06:09.845 rmmod: ERROR: Module i40iw is not currently loaded
00:06:09.845 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:06:09.845 + true
00:06:09.845 + for D in $DRIVERS
00:06:09.845 + sudo modprobe ice
00:06:09.845 + exit 0
00:06:09.854 [Pipeline] }
00:06:09.868 [Pipeline] // withEnv
00:06:09.871 [Pipeline] }
00:06:09.885 [Pipeline] // stage
00:06:09.893 [Pipeline] catchError
00:06:09.895 [Pipeline] {
00:06:09.908 [Pipeline] timeout
00:06:09.908 Timeout set to expire in 1 hr 0 min
00:06:09.910 [Pipeline] {
00:06:09.922 [Pipeline] stage
00:06:09.923 [Pipeline] { (Tests)
00:06:09.937 [Pipeline] sh
00:06:10.225 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:10.226 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:10.226 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:10.226 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:06:10.226 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:10.226 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:06:10.226 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:06:10.226 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:06:10.226 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:06:10.226 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:06:10.226 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:06:10.226 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:10.226 + source /etc/os-release
00:06:10.226 ++ NAME='Fedora Linux'
00:06:10.226 ++ VERSION='39 (Cloud Edition)'
00:06:10.226 ++ ID=fedora
00:06:10.226 ++ VERSION_ID=39
00:06:10.226 ++ VERSION_CODENAME=
00:06:10.226 ++ PLATFORM_ID=platform:f39
00:06:10.226 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:10.226 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:10.226 ++ LOGO=fedora-logo-icon
00:06:10.226 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:10.226 ++ HOME_URL=https://fedoraproject.org/
00:06:10.226 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:10.226 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:10.226 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:10.226 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:10.226 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:10.226 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:10.226 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:10.226 ++ SUPPORT_END=2024-11-12
00:06:10.226 ++ VARIANT='Cloud Edition'
00:06:10.226 ++ VARIANT_ID=cloud
00:06:10.226 + uname -a
00:06:10.226 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:10.226 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:06:13.525 Hugepages
00:06:13.525 node hugesize free / total
00:06:13.525 node0 1048576kB 0 / 0
00:06:13.525 node0 2048kB 0 / 0
00:06:13.525 node1 1048576kB 0 / 0
00:06:13.525 node1 2048kB 0 / 0
00:06:13.525
00:06:13.525 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:13.525 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:06:13.525 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:06:13.525 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:06:13.525 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:06:13.525 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:06:13.525 + rm -f /tmp/spdk-ld-path
00:06:13.525 + source autorun-spdk.conf
00:06:13.525 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:13.525 ++ SPDK_TEST_NVMF=1
00:06:13.525 ++ SPDK_TEST_NVME_CLI=1
00:06:13.525 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:13.525 ++ SPDK_TEST_NVMF_NICS=e810
00:06:13.525 ++ SPDK_TEST_VFIOUSER=1
00:06:13.525 ++ SPDK_RUN_UBSAN=1
00:06:13.525 ++ NET_TYPE=phy
00:06:13.525 ++ RUN_NIGHTLY=0
00:06:13.525 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:13.525 + [[ -n '' ]]
00:06:13.525 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:13.525 + for M in /var/spdk/build-*-manifest.txt
00:06:13.525 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:13.525 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:13.525 + for M in /var/spdk/build-*-manifest.txt
00:06:13.525 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:13.525 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:13.525 + for M in /var/spdk/build-*-manifest.txt
00:06:13.525 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:13.525 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:13.525 ++ uname
00:06:13.525 + [[ Linux == \L\i\n\u\x ]]
00:06:13.525 + sudo dmesg -T
00:06:13.525 + sudo dmesg --clear
00:06:13.786 + dmesg_pid=1661671
00:06:13.786 + [[ Fedora Linux == FreeBSD ]]
00:06:13.786 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:13.786 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:13.786 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:13.786 + [[ -x /usr/src/fio-static/fio ]]
00:06:13.786 + export FIO_BIN=/usr/src/fio-static/fio
00:06:13.786 + FIO_BIN=/usr/src/fio-static/fio
00:06:13.786 + sudo dmesg -Tw
00:06:13.786 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:13.786 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:13.786 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:13.786 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:13.786 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:13.786 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:13.786 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:13.786 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:13.786 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:13.786 08:03:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:06:13.786 08:03:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:06:13.786 08:03:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:06:13.786 08:03:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:13.786 08:03:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:13.786 08:03:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:06:13.786 08:03:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:13.786 08:03:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:13.786 08:03:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:13.786 08:03:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:13.786 08:03:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:13.786 08:03:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:13.786 08:03:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:13.786 08:03:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:13.786 08:03:18 -- paths/export.sh@5 -- $ export PATH
00:06:13.786 08:03:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:13.786 08:03:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:06:13.786 08:03:18 -- common/autobuild_common.sh@493 -- $ date +%s
00:06:13.786 08:03:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086198.XXXXXX
00:06:13.786 08:03:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086198.ENQu5j
00:06:13.786 08:03:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:06:13.786 08:03:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:06:13.786 08:03:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:06:13.786 08:03:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:06:13.786 08:03:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:06:13.786 08:03:18 -- common/autobuild_common.sh@509 -- $ get_config_params
00:06:13.786 08:03:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:06:13.786 08:03:18 -- common/autotest_common.sh@10 -- $ set +x
00:06:13.787 08:03:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:06:13.787 08:03:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:06:13.787 08:03:18 -- pm/common@17 -- $ local monitor
00:06:13.787 08:03:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:13.787 08:03:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:13.787 08:03:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:13.787 08:03:18 -- pm/common@21 -- $ date +%s
00:06:13.787 08:03:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:13.787 08:03:18 -- pm/common@25 -- $ sleep 1
00:06:13.787 08:03:18 -- pm/common@21 -- $ date +%s
00:06:13.787 08:03:18 -- pm/common@21 -- $ date +%s
00:06:13.787 08:03:18 -- pm/common@21 -- $ date +%s
00:06:13.787 08:03:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086198
00:06:13.787 08:03:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086198
00:06:13.787 08:03:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086198
00:06:13.787 08:03:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086198
00:06:14.047 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086198_collect-cpu-load.pm.log
00:06:14.047 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086198_collect-vmstat.pm.log
00:06:14.047 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086198_collect-cpu-temp.pm.log
00:06:14.047 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086198_collect-bmc-pm.bmc.pm.log
00:06:14.988 08:03:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:06:14.988 08:03:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:14.988 08:03:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:06:14.988 08:03:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:14.988 08:03:19 -- spdk/autobuild.sh@16 -- $ date -u
00:06:14.988 Wed Nov 20 07:03:19 AM UTC 2024
00:06:14.988 08:03:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:14.988 v25.01-pre-205-gc788bae60
00:06:14.988 08:03:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:06:14.988 08:03:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:14.988 08:03:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:14.988 08:03:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:14.988 08:03:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:14.988 08:03:19 -- common/autotest_common.sh@10 -- $ set +x
00:06:14.988 ************************************
00:06:14.988 START TEST ubsan
00:06:14.988 ************************************
00:06:14.988 08:03:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:14.988 using ubsan
00:06:14.988
00:06:14.988 real 0m0.001s
00:06:14.988 user 0m0.001s
00:06:14.988 sys 0m0.000s
00:06:14.988 08:03:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:14.988 08:03:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:14.988 ************************************
00:06:14.988 END TEST ubsan
00:06:14.988 ************************************
00:06:14.988 08:03:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:14.988 08:03:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:14.988 08:03:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:14.988 08:03:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:06:15.248 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:06:15.248 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:15.509 Using 'verbs' RDMA provider
00:06:31.366 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:43.597 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:43.597 Creating mk/config.mk...done.
00:06:43.597 Creating mk/cc.flags.mk...done.
00:06:43.597 Type 'make' to build.
00:06:43.597 08:03:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:06:43.597 08:03:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:43.597 08:03:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:43.597 08:03:48 -- common/autotest_common.sh@10 -- $ set +x
00:06:43.597 ************************************
00:06:43.597 START TEST make
00:06:43.597 ************************************
00:06:43.597 08:03:48 make -- common/autotest_common.sh@1129 -- $ make -j144
00:06:44.169 make[1]: Nothing to be done for 'all'.
00:06:45.548 The Meson build system
00:06:45.548 Version: 1.5.0
00:06:45.548 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:45.548 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:45.548 Build type: native build
00:06:45.548 Project name: libvfio-user
00:06:45.548 Project version: 0.0.1
00:06:45.548 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:45.548 C linker for the host machine: cc ld.bfd 2.40-14
00:06:45.548 Host machine cpu family: x86_64
00:06:45.548 Host machine cpu: x86_64
00:06:45.548 Run-time dependency threads found: YES
00:06:45.548 Library dl found: YES
00:06:45.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:45.548 Run-time dependency json-c found: YES 0.17
00:06:45.548 Run-time dependency cmocka found: YES 1.1.7
00:06:45.548 Program pytest-3 found: NO
00:06:45.548 Program flake8 found: NO
00:06:45.548 Program misspell-fixer found: NO
00:06:45.548 Program restructuredtext-lint found: NO
00:06:45.548 Program valgrind found: YES (/usr/bin/valgrind)
00:06:45.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:45.548 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:45.548 Compiler for C supports arguments -Wwrite-strings: YES
00:06:45.548 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:45.548 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:45.548 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:45.548 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:45.548 Build targets in project: 8
00:06:45.548 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:45.548 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:45.548
00:06:45.548 libvfio-user 0.0.1
00:06:45.548
00:06:45.548 User defined options
00:06:45.548 buildtype : debug
00:06:45.548 default_library: shared
00:06:45.548 libdir : /usr/local/lib
00:06:45.548
00:06:45.548 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:45.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:45.807 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:45.807 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:45.807 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:45.807 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:45.807 [5/37] Compiling C object samples/null.p/null.c.o
00:06:45.807 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:45.807 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:45.807 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:45.807 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:45.807 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:45.807 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:45.807 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:45.807 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:45.807 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:45.807 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:45.807 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:45.807 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:45.807 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:45.807 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:45.807 [20/37] Compiling C object samples/server.p/server.c.o
00:06:45.807 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:45.807 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:45.807 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:45.807 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:45.807 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:45.807 [26/37] Compiling C object samples/client.p/client.c.o
00:06:45.807 [27/37] Linking target samples/client
00:06:46.067 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:46.067 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:46.067 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:06:46.067 [31/37] Linking target test/unit_tests
00:06:46.067 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:46.067 [33/37] Linking target samples/server
00:06:46.067 [34/37] Linking target samples/lspci
00:06:46.067 [35/37] Linking target samples/shadow_ioeventfd_server
00:06:46.067 [36/37] Linking target samples/null
00:06:46.067 [37/37] Linking target samples/gpio-pci-idio-16
00:06:46.067 INFO: autodetecting backend as ninja
00:06:46.067 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:46.328 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:46.589 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:46.589 ninja: no work to do.
00:06:53.181 The Meson build system
00:06:53.181 Version: 1.5.0
00:06:53.181 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:53.181 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:53.181 Build type: native build
00:06:53.181 Program cat found: YES (/usr/bin/cat)
00:06:53.181 Project name: DPDK
00:06:53.181 Project version: 24.03.0
00:06:53.181 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:53.181 C linker for the host machine: cc ld.bfd 2.40-14
00:06:53.181 Host machine cpu family: x86_64
00:06:53.181 Host machine cpu: x86_64
00:06:53.181 Message: ## Building in Developer Mode ##
00:06:53.181 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:53.181 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:53.181 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:53.181 Program python3 found: YES (/usr/bin/python3)
00:06:53.181 Program cat found: YES (/usr/bin/cat)
00:06:53.181 Compiler for C supports arguments -march=native: YES
00:06:53.181 Checking for size of "void *" : 8
00:06:53.181 Checking for size of "void *" : 8 (cached)
00:06:53.181 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:53.181 Library m found: YES
00:06:53.181 Library numa found: YES
00:06:53.181 Has header "numaif.h" : YES
00:06:53.181 Library fdt found: NO
00:06:53.181 Library execinfo found: NO
00:06:53.181 Has header "execinfo.h" : YES
00:06:53.181 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:53.181 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:53.181 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:53.181 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:53.181 Run-time dependency openssl found: YES 3.1.1
00:06:53.181 Run-time dependency libpcap found: YES 1.10.4
00:06:53.181 Has header "pcap.h" with dependency libpcap: YES
00:06:53.181 Compiler for C supports arguments -Wcast-qual: YES
00:06:53.181 Compiler for C supports arguments -Wdeprecated: YES
00:06:53.181 Compiler for C supports arguments -Wformat: YES
00:06:53.181 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:53.181 Compiler for C supports arguments -Wformat-security: NO
00:06:53.181 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:53.181 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:53.181 Compiler for C supports arguments -Wnested-externs: YES
00:06:53.181 Compiler for C supports arguments -Wold-style-definition: YES
00:06:53.181 Compiler for C supports arguments -Wpointer-arith: YES
00:06:53.182 Compiler for C supports arguments -Wsign-compare: YES
00:06:53.182 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:53.182 Compiler for C supports arguments -Wundef: YES
00:06:53.182 Compiler for C supports arguments -Wwrite-strings: YES
00:06:53.182 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:53.182 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:53.182 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:53.182 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:53.182 Program objdump found: YES (/usr/bin/objdump)
00:06:53.182 Compiler for C supports arguments -mavx512f: YES
00:06:53.182 Checking if "AVX512 checking" compiles: YES
00:06:53.182 Fetching value of define "__SSE4_2__" : 1
00:06:53.182 Fetching value of define "__AES__" : 1
00:06:53.182 Fetching value of define "__AVX__" : 1
00:06:53.182 Fetching value of define "__AVX2__" : 1
00:06:53.182 Fetching value of define "__AVX512BW__" : 1
00:06:53.182 Fetching value of define "__AVX512CD__" : 1
00:06:53.182 Fetching value of define "__AVX512DQ__" : 1
00:06:53.182 Fetching value of define "__AVX512F__" : 1
00:06:53.182 Fetching value of define "__AVX512VL__" : 1 00:06:53.182 Fetching value of define "__PCLMUL__" : 1 00:06:53.182 Fetching value of define "__RDRND__" : 1 00:06:53.182 Fetching value of define "__RDSEED__" : 1 00:06:53.182 Fetching value of define "__VPCLMULQDQ__" : 1 00:06:53.182 Fetching value of define "__znver1__" : (undefined) 00:06:53.182 Fetching value of define "__znver2__" : (undefined) 00:06:53.182 Fetching value of define "__znver3__" : (undefined) 00:06:53.182 Fetching value of define "__znver4__" : (undefined) 00:06:53.182 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:53.182 Message: lib/log: Defining dependency "log" 00:06:53.182 Message: lib/kvargs: Defining dependency "kvargs" 00:06:53.182 Message: lib/telemetry: Defining dependency "telemetry" 00:06:53.182 Checking for function "getentropy" : NO 00:06:53.182 Message: lib/eal: Defining dependency "eal" 00:06:53.182 Message: lib/ring: Defining dependency "ring" 00:06:53.182 Message: lib/rcu: Defining dependency "rcu" 00:06:53.182 Message: lib/mempool: Defining dependency "mempool" 00:06:53.182 Message: lib/mbuf: Defining dependency "mbuf" 00:06:53.182 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:53.182 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:53.182 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:53.182 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:53.182 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:53.182 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:06:53.182 Compiler for C supports arguments -mpclmul: YES 00:06:53.182 Compiler for C supports arguments -maes: YES 00:06:53.182 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:53.182 Compiler for C supports arguments -mavx512bw: YES 00:06:53.182 Compiler for C supports arguments -mavx512dq: YES 00:06:53.182 Compiler for C supports arguments -mavx512vl: YES 00:06:53.182 Compiler for C supports arguments -mvpclmulqdq: YES 
00:06:53.182 Compiler for C supports arguments -mavx2: YES 00:06:53.182 Compiler for C supports arguments -mavx: YES 00:06:53.182 Message: lib/net: Defining dependency "net" 00:06:53.182 Message: lib/meter: Defining dependency "meter" 00:06:53.182 Message: lib/ethdev: Defining dependency "ethdev" 00:06:53.182 Message: lib/pci: Defining dependency "pci" 00:06:53.182 Message: lib/cmdline: Defining dependency "cmdline" 00:06:53.182 Message: lib/hash: Defining dependency "hash" 00:06:53.182 Message: lib/timer: Defining dependency "timer" 00:06:53.182 Message: lib/compressdev: Defining dependency "compressdev" 00:06:53.182 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:53.182 Message: lib/dmadev: Defining dependency "dmadev" 00:06:53.182 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:53.182 Message: lib/power: Defining dependency "power" 00:06:53.182 Message: lib/reorder: Defining dependency "reorder" 00:06:53.182 Message: lib/security: Defining dependency "security" 00:06:53.182 Has header "linux/userfaultfd.h" : YES 00:06:53.182 Has header "linux/vduse.h" : YES 00:06:53.182 Message: lib/vhost: Defining dependency "vhost" 00:06:53.182 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:53.182 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:53.182 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:53.182 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:53.182 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:53.182 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:53.182 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:53.182 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:53.182 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:53.182 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:53.182 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:53.182 Configuring doxy-api-html.conf using configuration 00:06:53.182 Configuring doxy-api-man.conf using configuration 00:06:53.182 Program mandb found: YES (/usr/bin/mandb) 00:06:53.182 Program sphinx-build found: NO 00:06:53.182 Configuring rte_build_config.h using configuration 00:06:53.182 Message: 00:06:53.182 ================= 00:06:53.182 Applications Enabled 00:06:53.182 ================= 00:06:53.182 00:06:53.182 apps: 00:06:53.182 00:06:53.182 00:06:53.182 Message: 00:06:53.182 ================= 00:06:53.182 Libraries Enabled 00:06:53.182 ================= 00:06:53.182 00:06:53.182 libs: 00:06:53.182 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:53.182 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:53.182 cryptodev, dmadev, power, reorder, security, vhost, 00:06:53.182 00:06:53.182 Message: 00:06:53.182 =============== 00:06:53.182 Drivers Enabled 00:06:53.182 =============== 00:06:53.182 00:06:53.182 common: 00:06:53.182 00:06:53.182 bus: 00:06:53.182 pci, vdev, 00:06:53.182 mempool: 00:06:53.182 ring, 00:06:53.182 dma: 00:06:53.182 00:06:53.182 net: 00:06:53.182 00:06:53.182 crypto: 00:06:53.182 00:06:53.182 compress: 00:06:53.182 00:06:53.182 vdpa: 00:06:53.182 00:06:53.182 00:06:53.182 Message: 00:06:53.182 ================= 00:06:53.182 Content Skipped 00:06:53.182 ================= 00:06:53.182 00:06:53.182 apps: 00:06:53.182 dumpcap: explicitly disabled via build config 00:06:53.182 graph: explicitly disabled via build config 00:06:53.182 pdump: explicitly disabled via build config 00:06:53.182 proc-info: explicitly disabled via build config 00:06:53.182 test-acl: explicitly disabled via build config 00:06:53.182 test-bbdev: explicitly disabled via build config 00:06:53.182 test-cmdline: explicitly disabled via build config 00:06:53.182 test-compress-perf: explicitly disabled via build config 00:06:53.182 test-crypto-perf: explicitly disabled via build 
config 00:06:53.182 test-dma-perf: explicitly disabled via build config 00:06:53.182 test-eventdev: explicitly disabled via build config 00:06:53.182 test-fib: explicitly disabled via build config 00:06:53.182 test-flow-perf: explicitly disabled via build config 00:06:53.182 test-gpudev: explicitly disabled via build config 00:06:53.182 test-mldev: explicitly disabled via build config 00:06:53.182 test-pipeline: explicitly disabled via build config 00:06:53.182 test-pmd: explicitly disabled via build config 00:06:53.182 test-regex: explicitly disabled via build config 00:06:53.182 test-sad: explicitly disabled via build config 00:06:53.182 test-security-perf: explicitly disabled via build config 00:06:53.182 00:06:53.182 libs: 00:06:53.182 argparse: explicitly disabled via build config 00:06:53.182 metrics: explicitly disabled via build config 00:06:53.182 acl: explicitly disabled via build config 00:06:53.182 bbdev: explicitly disabled via build config 00:06:53.182 bitratestats: explicitly disabled via build config 00:06:53.182 bpf: explicitly disabled via build config 00:06:53.182 cfgfile: explicitly disabled via build config 00:06:53.182 distributor: explicitly disabled via build config 00:06:53.182 efd: explicitly disabled via build config 00:06:53.182 eventdev: explicitly disabled via build config 00:06:53.182 dispatcher: explicitly disabled via build config 00:06:53.182 gpudev: explicitly disabled via build config 00:06:53.182 gro: explicitly disabled via build config 00:06:53.182 gso: explicitly disabled via build config 00:06:53.182 ip_frag: explicitly disabled via build config 00:06:53.182 jobstats: explicitly disabled via build config 00:06:53.182 latencystats: explicitly disabled via build config 00:06:53.182 lpm: explicitly disabled via build config 00:06:53.182 member: explicitly disabled via build config 00:06:53.182 pcapng: explicitly disabled via build config 00:06:53.182 rawdev: explicitly disabled via build config 00:06:53.182 regexdev: explicitly 
disabled via build config 00:06:53.182 mldev: explicitly disabled via build config 00:06:53.182 rib: explicitly disabled via build config 00:06:53.182 sched: explicitly disabled via build config 00:06:53.182 stack: explicitly disabled via build config 00:06:53.182 ipsec: explicitly disabled via build config 00:06:53.182 pdcp: explicitly disabled via build config 00:06:53.182 fib: explicitly disabled via build config 00:06:53.182 port: explicitly disabled via build config 00:06:53.182 pdump: explicitly disabled via build config 00:06:53.182 table: explicitly disabled via build config 00:06:53.182 pipeline: explicitly disabled via build config 00:06:53.182 graph: explicitly disabled via build config 00:06:53.182 node: explicitly disabled via build config 00:06:53.182 00:06:53.182 drivers: 00:06:53.182 common/cpt: not in enabled drivers build config 00:06:53.182 common/dpaax: not in enabled drivers build config 00:06:53.182 common/iavf: not in enabled drivers build config 00:06:53.182 common/idpf: not in enabled drivers build config 00:06:53.182 common/ionic: not in enabled drivers build config 00:06:53.182 common/mvep: not in enabled drivers build config 00:06:53.182 common/octeontx: not in enabled drivers build config 00:06:53.182 bus/auxiliary: not in enabled drivers build config 00:06:53.183 bus/cdx: not in enabled drivers build config 00:06:53.183 bus/dpaa: not in enabled drivers build config 00:06:53.183 bus/fslmc: not in enabled drivers build config 00:06:53.183 bus/ifpga: not in enabled drivers build config 00:06:53.183 bus/platform: not in enabled drivers build config 00:06:53.183 bus/uacce: not in enabled drivers build config 00:06:53.183 bus/vmbus: not in enabled drivers build config 00:06:53.183 common/cnxk: not in enabled drivers build config 00:06:53.183 common/mlx5: not in enabled drivers build config 00:06:53.183 common/nfp: not in enabled drivers build config 00:06:53.183 common/nitrox: not in enabled drivers build config 00:06:53.183 common/qat: not 
in enabled drivers build config 00:06:53.183 common/sfc_efx: not in enabled drivers build config 00:06:53.183 mempool/bucket: not in enabled drivers build config 00:06:53.183 mempool/cnxk: not in enabled drivers build config 00:06:53.183 mempool/dpaa: not in enabled drivers build config 00:06:53.183 mempool/dpaa2: not in enabled drivers build config 00:06:53.183 mempool/octeontx: not in enabled drivers build config 00:06:53.183 mempool/stack: not in enabled drivers build config 00:06:53.183 dma/cnxk: not in enabled drivers build config 00:06:53.183 dma/dpaa: not in enabled drivers build config 00:06:53.183 dma/dpaa2: not in enabled drivers build config 00:06:53.183 dma/hisilicon: not in enabled drivers build config 00:06:53.183 dma/idxd: not in enabled drivers build config 00:06:53.183 dma/ioat: not in enabled drivers build config 00:06:53.183 dma/skeleton: not in enabled drivers build config 00:06:53.183 net/af_packet: not in enabled drivers build config 00:06:53.183 net/af_xdp: not in enabled drivers build config 00:06:53.183 net/ark: not in enabled drivers build config 00:06:53.183 net/atlantic: not in enabled drivers build config 00:06:53.183 net/avp: not in enabled drivers build config 00:06:53.183 net/axgbe: not in enabled drivers build config 00:06:53.183 net/bnx2x: not in enabled drivers build config 00:06:53.183 net/bnxt: not in enabled drivers build config 00:06:53.183 net/bonding: not in enabled drivers build config 00:06:53.183 net/cnxk: not in enabled drivers build config 00:06:53.183 net/cpfl: not in enabled drivers build config 00:06:53.183 net/cxgbe: not in enabled drivers build config 00:06:53.183 net/dpaa: not in enabled drivers build config 00:06:53.183 net/dpaa2: not in enabled drivers build config 00:06:53.183 net/e1000: not in enabled drivers build config 00:06:53.183 net/ena: not in enabled drivers build config 00:06:53.183 net/enetc: not in enabled drivers build config 00:06:53.183 net/enetfec: not in enabled drivers build config 
00:06:53.183 net/enic: not in enabled drivers build config 00:06:53.183 net/failsafe: not in enabled drivers build config 00:06:53.183 net/fm10k: not in enabled drivers build config 00:06:53.183 net/gve: not in enabled drivers build config 00:06:53.183 net/hinic: not in enabled drivers build config 00:06:53.183 net/hns3: not in enabled drivers build config 00:06:53.183 net/i40e: not in enabled drivers build config 00:06:53.183 net/iavf: not in enabled drivers build config 00:06:53.183 net/ice: not in enabled drivers build config 00:06:53.183 net/idpf: not in enabled drivers build config 00:06:53.183 net/igc: not in enabled drivers build config 00:06:53.183 net/ionic: not in enabled drivers build config 00:06:53.183 net/ipn3ke: not in enabled drivers build config 00:06:53.183 net/ixgbe: not in enabled drivers build config 00:06:53.183 net/mana: not in enabled drivers build config 00:06:53.183 net/memif: not in enabled drivers build config 00:06:53.183 net/mlx4: not in enabled drivers build config 00:06:53.183 net/mlx5: not in enabled drivers build config 00:06:53.183 net/mvneta: not in enabled drivers build config 00:06:53.183 net/mvpp2: not in enabled drivers build config 00:06:53.183 net/netvsc: not in enabled drivers build config 00:06:53.183 net/nfb: not in enabled drivers build config 00:06:53.183 net/nfp: not in enabled drivers build config 00:06:53.183 net/ngbe: not in enabled drivers build config 00:06:53.183 net/null: not in enabled drivers build config 00:06:53.183 net/octeontx: not in enabled drivers build config 00:06:53.183 net/octeon_ep: not in enabled drivers build config 00:06:53.183 net/pcap: not in enabled drivers build config 00:06:53.183 net/pfe: not in enabled drivers build config 00:06:53.183 net/qede: not in enabled drivers build config 00:06:53.183 net/ring: not in enabled drivers build config 00:06:53.183 net/sfc: not in enabled drivers build config 00:06:53.183 net/softnic: not in enabled drivers build config 00:06:53.183 net/tap: not in 
enabled drivers build config 00:06:53.183 net/thunderx: not in enabled drivers build config 00:06:53.183 net/txgbe: not in enabled drivers build config 00:06:53.183 net/vdev_netvsc: not in enabled drivers build config 00:06:53.183 net/vhost: not in enabled drivers build config 00:06:53.183 net/virtio: not in enabled drivers build config 00:06:53.183 net/vmxnet3: not in enabled drivers build config 00:06:53.183 raw/*: missing internal dependency, "rawdev" 00:06:53.183 crypto/armv8: not in enabled drivers build config 00:06:53.183 crypto/bcmfs: not in enabled drivers build config 00:06:53.183 crypto/caam_jr: not in enabled drivers build config 00:06:53.183 crypto/ccp: not in enabled drivers build config 00:06:53.183 crypto/cnxk: not in enabled drivers build config 00:06:53.183 crypto/dpaa_sec: not in enabled drivers build config 00:06:53.183 crypto/dpaa2_sec: not in enabled drivers build config 00:06:53.183 crypto/ipsec_mb: not in enabled drivers build config 00:06:53.183 crypto/mlx5: not in enabled drivers build config 00:06:53.183 crypto/mvsam: not in enabled drivers build config 00:06:53.183 crypto/nitrox: not in enabled drivers build config 00:06:53.183 crypto/null: not in enabled drivers build config 00:06:53.183 crypto/octeontx: not in enabled drivers build config 00:06:53.183 crypto/openssl: not in enabled drivers build config 00:06:53.183 crypto/scheduler: not in enabled drivers build config 00:06:53.183 crypto/uadk: not in enabled drivers build config 00:06:53.183 crypto/virtio: not in enabled drivers build config 00:06:53.183 compress/isal: not in enabled drivers build config 00:06:53.183 compress/mlx5: not in enabled drivers build config 00:06:53.183 compress/nitrox: not in enabled drivers build config 00:06:53.183 compress/octeontx: not in enabled drivers build config 00:06:53.183 compress/zlib: not in enabled drivers build config 00:06:53.183 regex/*: missing internal dependency, "regexdev" 00:06:53.183 ml/*: missing internal dependency, "mldev" 
00:06:53.183 vdpa/ifc: not in enabled drivers build config 00:06:53.183 vdpa/mlx5: not in enabled drivers build config 00:06:53.183 vdpa/nfp: not in enabled drivers build config 00:06:53.183 vdpa/sfc: not in enabled drivers build config 00:06:53.183 event/*: missing internal dependency, "eventdev" 00:06:53.183 baseband/*: missing internal dependency, "bbdev" 00:06:53.183 gpu/*: missing internal dependency, "gpudev" 00:06:53.183 00:06:53.183 00:06:53.183 Build targets in project: 84 00:06:53.183 00:06:53.183 DPDK 24.03.0 00:06:53.183 00:06:53.183 User defined options 00:06:53.183 buildtype : debug 00:06:53.183 default_library : shared 00:06:53.183 libdir : lib 00:06:53.183 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:53.183 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:53.183 c_link_args : 00:06:53.183 cpu_instruction_set: native 00:06:53.183 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:06:53.183 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:06:53.183 enable_docs : false 00:06:53.183 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:53.183 enable_kmods : false 00:06:53.183 max_lcores : 128 00:06:53.183 tests : false 00:06:53.183 00:06:53.183 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:53.183 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:53.183 [1/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:53.183 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:53.183 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:53.183 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:53.183 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:53.183 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:53.183 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:53.183 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:53.183 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:53.183 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:53.183 [11/267] Linking static target lib/librte_kvargs.a 00:06:53.183 [12/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:53.442 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:53.442 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:53.442 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:53.442 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:53.442 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:53.442 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:53.442 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:53.442 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:53.442 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:53.442 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:53.442 [23/267] Linking static target lib/librte_log.a 00:06:53.442 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:53.442 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:53.442 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:53.442 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:53.442 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:53.442 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:53.442 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:53.442 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:53.442 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:53.442 [33/267] Linking static target lib/librte_pci.a 00:06:53.442 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:53.442 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:53.442 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:53.442 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:53.701 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:53.701 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:53.701 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:53.701 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.701 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:53.701 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:53.701 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:53.701 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:53.701 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:53.701 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:53.701 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:53.701 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:53.701 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:53.701 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:53.701 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:53.701 [53/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:53.701 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:53.701 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:53.701 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:53.701 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:53.701 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:53.701 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:53.701 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:53.701 [61/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:53.701 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:53.701 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:53.701 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:53.701 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:53.701 [66/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:53.701 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:53.701 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 
00:06:53.701 [69/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:53.701 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:53.701 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:53.701 [72/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:53.701 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:53.701 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:53.701 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:53.701 [76/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:53.701 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:53.701 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:53.701 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:53.701 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:53.701 [81/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:53.701 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:53.701 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:53.701 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:53.701 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:53.701 [86/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:53.961 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:53.961 [88/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:53.961 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:53.961 [90/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:53.961 [91/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:53.961 [92/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:53.961 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:53.961 [94/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:53.961 [95/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:53.961 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:53.961 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:53.961 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:53.961 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:53.961 [100/267] Linking static target lib/librte_meter.a 00:06:53.961 [101/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:53.961 [102/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:53.961 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:06:53.961 [104/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:53.961 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:53.961 [106/267] Linking static target lib/librte_telemetry.a 00:06:53.961 [107/267] Linking static target lib/librte_compressdev.a 00:06:53.961 [108/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:53.961 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:53.961 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:53.961 [111/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:53.961 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:53.961 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:53.961 [114/267] Linking static 
target lib/librte_ring.a 00:06:53.962 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:53.962 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:53.962 [117/267] Linking static target lib/librte_mempool.a 00:06:53.962 [118/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:53.962 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:53.962 [120/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:53.962 [121/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:53.962 [122/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:53.962 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:53.962 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:53.962 [125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:53.962 [126/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:53.962 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:53.962 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:53.962 [129/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:53.962 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:53.962 [131/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:53.962 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:53.962 [133/267] Linking static target lib/librte_timer.a 00:06:53.962 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:53.962 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:53.962 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:53.962 [137/267] Linking static target 
lib/librte_cmdline.a 00:06:53.962 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:53.962 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:53.962 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:53.962 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:53.962 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:53.962 [143/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:53.962 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:53.962 [145/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:53.962 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:53.962 [147/267] Linking static target lib/librte_power.a 00:06:53.962 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:53.962 [149/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:53.962 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:53.962 [151/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:53.962 [152/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:53.962 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:53.962 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:53.962 [155/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:53.962 [156/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:53.962 [157/267] Linking static target lib/librte_dmadev.a 00:06:53.962 [158/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:53.962 [159/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.962 [160/267] Linking static target 
lib/librte_reorder.a 00:06:53.962 [161/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:53.962 [162/267] Linking static target lib/librte_rcu.a 00:06:53.962 [163/267] Linking static target lib/librte_net.a 00:06:53.962 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:53.962 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:53.962 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:53.962 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:53.962 [168/267] Linking target lib/librte_log.so.24.1 00:06:53.962 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:53.962 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:53.962 [171/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:53.962 [172/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:53.962 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:54.223 [174/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:54.223 [175/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:54.223 [176/267] Linking static target lib/librte_eal.a 00:06:54.223 [177/267] Linking static target lib/librte_security.a 00:06:54.223 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:54.223 [179/267] Linking static target lib/librte_mbuf.a 00:06:54.223 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:54.223 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:54.223 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:54.223 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:54.223 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:54.223 
[185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:54.223 [186/267] Linking static target drivers/librte_bus_vdev.a 00:06:54.223 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:54.223 [188/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:54.223 [189/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.223 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:54.223 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:54.223 [192/267] Linking static target lib/librte_hash.a 00:06:54.223 [193/267] Linking target lib/librte_kvargs.so.24.1 00:06:54.223 [194/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.223 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:54.223 [196/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:54.223 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:54.223 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:54.223 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:54.223 [200/267] Linking static target lib/librte_cryptodev.a 00:06:54.223 [201/267] Linking static target drivers/librte_bus_pci.a 00:06:54.223 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:54.223 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:54.505 [204/267] Linking static target drivers/librte_mempool_ring.a 00:06:54.505 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:54.505 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped 
by meson to capture output) 00:06:54.505 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.505 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:54.505 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.505 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.506 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.506 [212/267] Linking target lib/librte_telemetry.so.24.1 00:06:54.506 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.795 [214/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.795 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:54.795 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.795 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.795 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:55.076 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.076 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:55.076 [221/267] Linking static target lib/librte_ethdev.a 00:06:55.076 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.076 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.076 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.336 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:55.336 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.907 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:55.907 [228/267] Linking static target lib/librte_vhost.a 00:06:56.480 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.398 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.992 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.567 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.567 [233/267] Linking target lib/librte_eal.so.24.1 00:07:05.828 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:05.828 [235/267] Linking target lib/librte_ring.so.24.1 00:07:05.828 [236/267] Linking target lib/librte_meter.so.24.1 00:07:05.828 [237/267] Linking target lib/librte_timer.so.24.1 00:07:05.828 [238/267] Linking target lib/librte_pci.so.24.1 00:07:05.828 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:07:05.828 [240/267] Linking target lib/librte_dmadev.so.24.1 00:07:05.828 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:05.828 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:05.828 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:05.828 [244/267] Linking target lib/librte_rcu.so.24.1 00:07:05.828 [245/267] Linking target lib/librte_mempool.so.24.1 00:07:05.828 [246/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:05.828 [247/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:05.828 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:07:06.089 [249/267] Generating symbol 
file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:06.089 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:06.089 [251/267] Linking target lib/librte_mbuf.so.24.1 00:07:06.089 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:07:06.089 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:06.350 [254/267] Linking target lib/librte_reorder.so.24.1 00:07:06.350 [255/267] Linking target lib/librte_compressdev.so.24.1 00:07:06.350 [256/267] Linking target lib/librte_net.so.24.1 00:07:06.350 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:07:06.350 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:06.350 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:06.350 [260/267] Linking target lib/librte_security.so.24.1 00:07:06.350 [261/267] Linking target lib/librte_hash.so.24.1 00:07:06.350 [262/267] Linking target lib/librte_cmdline.so.24.1 00:07:06.350 [263/267] Linking target lib/librte_ethdev.so.24.1 00:07:06.612 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:06.612 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:06.612 [266/267] Linking target lib/librte_power.so.24.1 00:07:06.612 [267/267] Linking target lib/librte_vhost.so.24.1 00:07:06.612 INFO: autodetecting backend as ninja 00:07:06.612 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:07:11.909 CC lib/ut_mock/mock.o 00:07:11.909 CC lib/ut/ut.o 00:07:11.909 CC lib/log/log.o 00:07:11.909 CC lib/log/log_flags.o 00:07:11.909 CC lib/log/log_deprecated.o 00:07:11.910 LIB libspdk_ut.a 00:07:11.910 LIB libspdk_ut_mock.a 00:07:11.910 SO libspdk_ut.so.2.0 00:07:11.910 LIB libspdk_log.a 00:07:11.910 SO libspdk_ut_mock.so.6.0 
00:07:11.910 SO libspdk_log.so.7.1 00:07:11.910 SYMLINK libspdk_ut.so 00:07:11.910 SYMLINK libspdk_ut_mock.so 00:07:11.910 SYMLINK libspdk_log.so 00:07:11.910 CC lib/ioat/ioat.o 00:07:11.910 CC lib/dma/dma.o 00:07:11.910 CC lib/util/base64.o 00:07:11.910 CC lib/util/bit_array.o 00:07:11.910 CC lib/util/crc32.o 00:07:11.910 CC lib/util/cpuset.o 00:07:11.910 CC lib/util/crc16.o 00:07:11.910 CXX lib/trace_parser/trace.o 00:07:11.910 CC lib/util/crc32c.o 00:07:11.910 CC lib/util/crc32_ieee.o 00:07:11.910 CC lib/util/crc64.o 00:07:11.910 CC lib/util/dif.o 00:07:11.910 CC lib/util/fd.o 00:07:11.910 CC lib/util/fd_group.o 00:07:11.910 CC lib/util/file.o 00:07:11.910 CC lib/util/hexlify.o 00:07:11.910 CC lib/util/iov.o 00:07:11.910 CC lib/util/math.o 00:07:11.910 CC lib/util/net.o 00:07:11.910 CC lib/util/strerror_tls.o 00:07:11.910 CC lib/util/pipe.o 00:07:11.910 CC lib/util/string.o 00:07:11.910 CC lib/util/uuid.o 00:07:11.910 CC lib/util/xor.o 00:07:11.910 CC lib/util/zipf.o 00:07:11.910 CC lib/util/md5.o 00:07:12.170 CC lib/vfio_user/host/vfio_user_pci.o 00:07:12.170 CC lib/vfio_user/host/vfio_user.o 00:07:12.170 LIB libspdk_dma.a 00:07:12.170 SO libspdk_dma.so.5.0 00:07:12.170 LIB libspdk_ioat.a 00:07:12.432 SO libspdk_ioat.so.7.0 00:07:12.432 SYMLINK libspdk_dma.so 00:07:12.432 SYMLINK libspdk_ioat.so 00:07:12.432 LIB libspdk_vfio_user.a 00:07:12.432 SO libspdk_vfio_user.so.5.0 00:07:12.432 LIB libspdk_util.a 00:07:12.432 SYMLINK libspdk_vfio_user.so 00:07:12.692 SO libspdk_util.so.10.1 00:07:12.692 SYMLINK libspdk_util.so 00:07:12.953 LIB libspdk_trace_parser.a 00:07:12.953 SO libspdk_trace_parser.so.6.0 00:07:12.953 SYMLINK libspdk_trace_parser.so 00:07:12.953 CC lib/idxd/idxd.o 00:07:12.953 CC lib/idxd/idxd_user.o 00:07:12.953 CC lib/json/json_parse.o 00:07:12.953 CC lib/idxd/idxd_kernel.o 00:07:12.953 CC lib/json/json_util.o 00:07:12.953 CC lib/json/json_write.o 00:07:13.214 CC lib/env_dpdk/env.o 00:07:13.214 CC lib/env_dpdk/memory.o 00:07:13.214 CC 
lib/env_dpdk/pci.o 00:07:13.214 CC lib/env_dpdk/init.o 00:07:13.214 CC lib/env_dpdk/threads.o 00:07:13.214 CC lib/rdma_utils/rdma_utils.o 00:07:13.214 CC lib/env_dpdk/pci_ioat.o 00:07:13.214 CC lib/env_dpdk/pci_virtio.o 00:07:13.214 CC lib/env_dpdk/pci_vmd.o 00:07:13.214 CC lib/env_dpdk/pci_idxd.o 00:07:13.214 CC lib/conf/conf.o 00:07:13.214 CC lib/vmd/vmd.o 00:07:13.214 CC lib/env_dpdk/pci_event.o 00:07:13.214 CC lib/env_dpdk/sigbus_handler.o 00:07:13.214 CC lib/vmd/led.o 00:07:13.214 CC lib/env_dpdk/pci_dpdk.o 00:07:13.214 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:13.214 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:13.474 LIB libspdk_conf.a 00:07:13.474 LIB libspdk_rdma_utils.a 00:07:13.474 LIB libspdk_json.a 00:07:13.474 SO libspdk_conf.so.6.0 00:07:13.474 SO libspdk_rdma_utils.so.1.0 00:07:13.474 SO libspdk_json.so.6.0 00:07:13.474 SYMLINK libspdk_conf.so 00:07:13.474 SYMLINK libspdk_rdma_utils.so 00:07:13.474 SYMLINK libspdk_json.so 00:07:13.474 LIB libspdk_idxd.a 00:07:13.474 SO libspdk_idxd.so.12.1 00:07:13.735 SYMLINK libspdk_idxd.so 00:07:13.735 LIB libspdk_vmd.a 00:07:13.735 SO libspdk_vmd.so.6.0 00:07:13.735 SYMLINK libspdk_vmd.so 00:07:13.735 CC lib/rdma_provider/common.o 00:07:13.735 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:13.735 CC lib/jsonrpc/jsonrpc_server.o 00:07:13.735 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:13.735 CC lib/jsonrpc/jsonrpc_client.o 00:07:13.735 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:13.997 LIB libspdk_rdma_provider.a 00:07:13.997 SO libspdk_rdma_provider.so.7.0 00:07:13.997 LIB libspdk_jsonrpc.a 00:07:14.259 SYMLINK libspdk_rdma_provider.so 00:07:14.259 SO libspdk_jsonrpc.so.6.0 00:07:14.259 SYMLINK libspdk_jsonrpc.so 00:07:14.259 LIB libspdk_env_dpdk.a 00:07:14.520 SO libspdk_env_dpdk.so.15.1 00:07:14.520 SYMLINK libspdk_env_dpdk.so 00:07:14.520 CC lib/rpc/rpc.o 00:07:14.781 LIB libspdk_rpc.a 00:07:14.781 SO libspdk_rpc.so.6.0 00:07:14.781 SYMLINK libspdk_rpc.so 00:07:15.352 CC lib/notify/notify.o 00:07:15.352 CC 
lib/notify/notify_rpc.o 00:07:15.352 CC lib/trace/trace.o 00:07:15.352 CC lib/trace/trace_flags.o 00:07:15.352 CC lib/trace/trace_rpc.o 00:07:15.352 CC lib/keyring/keyring.o 00:07:15.352 CC lib/keyring/keyring_rpc.o 00:07:15.352 LIB libspdk_notify.a 00:07:15.352 SO libspdk_notify.so.6.0 00:07:15.614 LIB libspdk_trace.a 00:07:15.614 LIB libspdk_keyring.a 00:07:15.614 SYMLINK libspdk_notify.so 00:07:15.614 SO libspdk_trace.so.11.0 00:07:15.614 SO libspdk_keyring.so.2.0 00:07:15.614 SYMLINK libspdk_trace.so 00:07:15.614 SYMLINK libspdk_keyring.so 00:07:15.875 CC lib/thread/thread.o 00:07:15.875 CC lib/thread/iobuf.o 00:07:15.875 CC lib/sock/sock.o 00:07:15.875 CC lib/sock/sock_rpc.o 00:07:16.448 LIB libspdk_sock.a 00:07:16.448 SO libspdk_sock.so.10.0 00:07:16.448 SYMLINK libspdk_sock.so 00:07:16.710 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:16.710 CC lib/nvme/nvme_ctrlr.o 00:07:16.710 CC lib/nvme/nvme_fabric.o 00:07:16.710 CC lib/nvme/nvme_pcie_common.o 00:07:16.710 CC lib/nvme/nvme_ns_cmd.o 00:07:16.710 CC lib/nvme/nvme_ns.o 00:07:16.710 CC lib/nvme/nvme_pcie.o 00:07:16.710 CC lib/nvme/nvme_qpair.o 00:07:16.710 CC lib/nvme/nvme.o 00:07:16.710 CC lib/nvme/nvme_quirks.o 00:07:16.710 CC lib/nvme/nvme_transport.o 00:07:16.710 CC lib/nvme/nvme_discovery.o 00:07:16.710 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:16.710 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:16.710 CC lib/nvme/nvme_tcp.o 00:07:16.710 CC lib/nvme/nvme_opal.o 00:07:16.710 CC lib/nvme/nvme_io_msg.o 00:07:16.710 CC lib/nvme/nvme_poll_group.o 00:07:16.710 CC lib/nvme/nvme_zns.o 00:07:16.710 CC lib/nvme/nvme_stubs.o 00:07:16.710 CC lib/nvme/nvme_auth.o 00:07:16.710 CC lib/nvme/nvme_cuse.o 00:07:16.710 CC lib/nvme/nvme_vfio_user.o 00:07:16.710 CC lib/nvme/nvme_rdma.o 00:07:17.280 LIB libspdk_thread.a 00:07:17.280 SO libspdk_thread.so.11.0 00:07:17.280 SYMLINK libspdk_thread.so 00:07:17.541 CC lib/vfu_tgt/tgt_endpoint.o 00:07:17.541 CC lib/vfu_tgt/tgt_rpc.o 00:07:17.801 CC lib/accel/accel.o 00:07:17.801 CC 
lib/accel/accel_rpc.o 00:07:17.801 CC lib/accel/accel_sw.o 00:07:17.801 CC lib/virtio/virtio_vhost_user.o 00:07:17.801 CC lib/fsdev/fsdev.o 00:07:17.801 CC lib/virtio/virtio.o 00:07:17.801 CC lib/fsdev/fsdev_io.o 00:07:17.801 CC lib/virtio/virtio_pci.o 00:07:17.801 CC lib/fsdev/fsdev_rpc.o 00:07:17.801 CC lib/blob/blobstore.o 00:07:17.801 CC lib/virtio/virtio_vfio_user.o 00:07:17.801 CC lib/blob/request.o 00:07:17.801 CC lib/blob/blob_bs_dev.o 00:07:17.801 CC lib/blob/zeroes.o 00:07:17.801 CC lib/init/json_config.o 00:07:17.801 CC lib/init/subsystem.o 00:07:17.801 CC lib/init/subsystem_rpc.o 00:07:17.801 CC lib/init/rpc.o 00:07:17.801 LIB libspdk_init.a 00:07:18.062 SO libspdk_init.so.6.0 00:07:18.062 LIB libspdk_vfu_tgt.a 00:07:18.062 LIB libspdk_virtio.a 00:07:18.062 SO libspdk_vfu_tgt.so.3.0 00:07:18.062 SO libspdk_virtio.so.7.0 00:07:18.062 SYMLINK libspdk_init.so 00:07:18.062 SYMLINK libspdk_vfu_tgt.so 00:07:18.062 SYMLINK libspdk_virtio.so 00:07:18.062 LIB libspdk_nvme.a 00:07:18.343 LIB libspdk_fsdev.a 00:07:18.343 SO libspdk_nvme.so.15.0 00:07:18.343 SO libspdk_fsdev.so.2.0 00:07:18.343 SYMLINK libspdk_fsdev.so 00:07:18.343 CC lib/event/app.o 00:07:18.343 CC lib/event/reactor.o 00:07:18.343 CC lib/event/log_rpc.o 00:07:18.343 CC lib/event/app_rpc.o 00:07:18.343 CC lib/event/scheduler_static.o 00:07:18.604 SYMLINK libspdk_nvme.so 00:07:18.604 LIB libspdk_accel.a 00:07:18.604 SO libspdk_accel.so.16.0 00:07:18.604 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:18.864 SYMLINK libspdk_accel.so 00:07:18.864 LIB libspdk_event.a 00:07:18.864 SO libspdk_event.so.14.0 00:07:18.864 SYMLINK libspdk_event.so 00:07:19.125 CC lib/bdev/bdev.o 00:07:19.125 CC lib/bdev/bdev_zone.o 00:07:19.125 CC lib/bdev/bdev_rpc.o 00:07:19.125 CC lib/bdev/part.o 00:07:19.125 CC lib/bdev/scsi_nvme.o 00:07:19.385 LIB libspdk_fuse_dispatcher.a 00:07:19.385 SO libspdk_fuse_dispatcher.so.1.0 00:07:19.385 SYMLINK libspdk_fuse_dispatcher.so 00:07:20.329 LIB libspdk_blob.a 00:07:20.329 SO 
libspdk_blob.so.11.0 00:07:20.329 SYMLINK libspdk_blob.so 00:07:20.900 CC lib/blobfs/blobfs.o 00:07:20.900 CC lib/blobfs/tree.o 00:07:20.900 CC lib/lvol/lvol.o 00:07:21.474 LIB libspdk_bdev.a 00:07:21.474 SO libspdk_bdev.so.17.0 00:07:21.474 LIB libspdk_blobfs.a 00:07:21.474 SO libspdk_blobfs.so.10.0 00:07:21.474 SYMLINK libspdk_bdev.so 00:07:21.474 LIB libspdk_lvol.a 00:07:21.474 SYMLINK libspdk_blobfs.so 00:07:21.474 SO libspdk_lvol.so.10.0 00:07:21.735 SYMLINK libspdk_lvol.so 00:07:21.995 CC lib/nvmf/ctrlr.o 00:07:21.995 CC lib/nvmf/ctrlr_discovery.o 00:07:21.995 CC lib/nvmf/ctrlr_bdev.o 00:07:21.995 CC lib/nvmf/subsystem.o 00:07:21.995 CC lib/nvmf/nvmf.o 00:07:21.995 CC lib/nvmf/nvmf_rpc.o 00:07:21.995 CC lib/nvmf/transport.o 00:07:21.995 CC lib/nvmf/tcp.o 00:07:21.995 CC lib/nvmf/stubs.o 00:07:21.995 CC lib/nvmf/mdns_server.o 00:07:21.995 CC lib/nvmf/vfio_user.o 00:07:21.995 CC lib/nvmf/rdma.o 00:07:21.995 CC lib/nvmf/auth.o 00:07:21.995 CC lib/scsi/dev.o 00:07:21.995 CC lib/scsi/lun.o 00:07:21.995 CC lib/scsi/port.o 00:07:21.995 CC lib/scsi/scsi.o 00:07:21.995 CC lib/scsi/scsi_pr.o 00:07:21.995 CC lib/ftl/ftl_core.o 00:07:21.995 CC lib/scsi/scsi_bdev.o 00:07:21.995 CC lib/ftl/ftl_init.o 00:07:21.995 CC lib/scsi/scsi_rpc.o 00:07:21.995 CC lib/ftl/ftl_layout.o 00:07:21.995 CC lib/nbd/nbd.o 00:07:21.995 CC lib/ublk/ublk.o 00:07:21.995 CC lib/ftl/ftl_debug.o 00:07:21.995 CC lib/nbd/nbd_rpc.o 00:07:21.995 CC lib/ublk/ublk_rpc.o 00:07:21.995 CC lib/scsi/task.o 00:07:21.995 CC lib/ftl/ftl_io.o 00:07:21.995 CC lib/ftl/ftl_sb.o 00:07:21.995 CC lib/ftl/ftl_l2p.o 00:07:21.995 CC lib/ftl/ftl_l2p_flat.o 00:07:21.995 CC lib/ftl/ftl_nv_cache.o 00:07:21.995 CC lib/ftl/ftl_band.o 00:07:21.995 CC lib/ftl/ftl_band_ops.o 00:07:21.995 CC lib/ftl/ftl_writer.o 00:07:21.995 CC lib/ftl/ftl_rq.o 00:07:21.995 CC lib/ftl/ftl_reloc.o 00:07:21.995 CC lib/ftl/ftl_l2p_cache.o 00:07:21.995 CC lib/ftl/ftl_p2l.o 00:07:21.995 CC lib/ftl/ftl_p2l_log.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt.o 
00:07:21.995 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:21.995 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:21.995 CC lib/ftl/utils/ftl_conf.o 00:07:21.995 CC lib/ftl/utils/ftl_mempool.o 00:07:21.995 CC lib/ftl/utils/ftl_md.o 00:07:21.995 CC lib/ftl/utils/ftl_bitmap.o 00:07:21.995 CC lib/ftl/utils/ftl_property.o 00:07:21.995 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:21.995 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:21.995 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:21.995 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:21.995 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:21.995 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:21.995 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:21.995 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:21.995 CC lib/ftl/base/ftl_base_bdev.o 00:07:21.995 CC lib/ftl/ftl_trace.o 00:07:21.995 CC lib/ftl/base/ftl_base_dev.o 00:07:22.567 LIB libspdk_nbd.a 00:07:22.567 SO libspdk_nbd.so.7.0 00:07:22.567 LIB libspdk_scsi.a 00:07:22.567 SO libspdk_scsi.so.9.0 00:07:22.567 SYMLINK libspdk_nbd.so 00:07:22.567 SYMLINK libspdk_scsi.so 00:07:22.567 LIB libspdk_ublk.a 00:07:22.567 SO libspdk_ublk.so.3.0 00:07:22.829 SYMLINK libspdk_ublk.so 00:07:22.829 LIB libspdk_ftl.a 00:07:22.829 CC lib/iscsi/conn.o 00:07:22.829 CC lib/iscsi/init_grp.o 00:07:22.829 CC lib/iscsi/iscsi.o 00:07:22.829 CC 
lib/iscsi/param.o 00:07:22.829 CC lib/iscsi/portal_grp.o 00:07:22.829 CC lib/iscsi/iscsi_rpc.o 00:07:22.829 CC lib/iscsi/tgt_node.o 00:07:22.829 CC lib/iscsi/iscsi_subsystem.o 00:07:22.829 CC lib/iscsi/task.o 00:07:22.829 CC lib/vhost/vhost.o 00:07:22.829 CC lib/vhost/vhost_rpc.o 00:07:22.829 CC lib/vhost/vhost_scsi.o 00:07:22.829 CC lib/vhost/vhost_blk.o 00:07:22.829 CC lib/vhost/rte_vhost_user.o 00:07:23.089 SO libspdk_ftl.so.9.0 00:07:23.351 SYMLINK libspdk_ftl.so 00:07:23.922 LIB libspdk_nvmf.a 00:07:23.922 SO libspdk_nvmf.so.20.0 00:07:23.922 LIB libspdk_vhost.a 00:07:23.922 SO libspdk_vhost.so.8.0 00:07:24.183 SYMLINK libspdk_vhost.so 00:07:24.183 SYMLINK libspdk_nvmf.so 00:07:24.183 LIB libspdk_iscsi.a 00:07:24.183 SO libspdk_iscsi.so.8.0 00:07:24.445 SYMLINK libspdk_iscsi.so 00:07:25.017 CC module/vfu_device/vfu_virtio.o 00:07:25.017 CC module/vfu_device/vfu_virtio_blk.o 00:07:25.017 CC module/vfu_device/vfu_virtio_scsi.o 00:07:25.017 CC module/env_dpdk/env_dpdk_rpc.o 00:07:25.017 CC module/vfu_device/vfu_virtio_rpc.o 00:07:25.017 CC module/vfu_device/vfu_virtio_fs.o 00:07:25.017 CC module/blob/bdev/blob_bdev.o 00:07:25.017 CC module/accel/error/accel_error.o 00:07:25.017 CC module/accel/ioat/accel_ioat.o 00:07:25.017 LIB libspdk_env_dpdk_rpc.a 00:07:25.017 CC module/accel/error/accel_error_rpc.o 00:07:25.017 CC module/accel/ioat/accel_ioat_rpc.o 00:07:25.017 CC module/accel/iaa/accel_iaa.o 00:07:25.017 CC module/accel/iaa/accel_iaa_rpc.o 00:07:25.017 CC module/accel/dsa/accel_dsa.o 00:07:25.017 CC module/accel/dsa/accel_dsa_rpc.o 00:07:25.017 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:25.017 CC module/sock/posix/posix.o 00:07:25.017 CC module/keyring/file/keyring.o 00:07:25.017 CC module/keyring/linux/keyring.o 00:07:25.017 CC module/keyring/file/keyring_rpc.o 00:07:25.017 CC module/keyring/linux/keyring_rpc.o 00:07:25.017 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:25.017 CC module/scheduler/gscheduler/gscheduler.o 00:07:25.017 CC 
module/fsdev/aio/fsdev_aio.o 00:07:25.017 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:25.017 CC module/fsdev/aio/linux_aio_mgr.o 00:07:25.017 SO libspdk_env_dpdk_rpc.so.6.0 00:07:25.278 SYMLINK libspdk_env_dpdk_rpc.so 00:07:25.278 LIB libspdk_keyring_linux.a 00:07:25.278 LIB libspdk_scheduler_dpdk_governor.a 00:07:25.278 LIB libspdk_keyring_file.a 00:07:25.278 LIB libspdk_accel_ioat.a 00:07:25.278 LIB libspdk_scheduler_gscheduler.a 00:07:25.278 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:25.278 SO libspdk_accel_ioat.so.6.0 00:07:25.278 SO libspdk_keyring_linux.so.1.0 00:07:25.278 LIB libspdk_accel_error.a 00:07:25.278 LIB libspdk_accel_iaa.a 00:07:25.278 SO libspdk_keyring_file.so.2.0 00:07:25.278 SO libspdk_scheduler_gscheduler.so.4.0 00:07:25.278 LIB libspdk_scheduler_dynamic.a 00:07:25.278 SO libspdk_accel_error.so.2.0 00:07:25.278 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:25.278 SYMLINK libspdk_accel_ioat.so 00:07:25.278 SO libspdk_accel_iaa.so.3.0 00:07:25.278 LIB libspdk_blob_bdev.a 00:07:25.278 SO libspdk_scheduler_dynamic.so.4.0 00:07:25.278 SYMLINK libspdk_keyring_linux.so 00:07:25.278 SYMLINK libspdk_scheduler_gscheduler.so 00:07:25.278 LIB libspdk_accel_dsa.a 00:07:25.278 SYMLINK libspdk_keyring_file.so 00:07:25.278 SO libspdk_blob_bdev.so.11.0 00:07:25.278 SYMLINK libspdk_accel_error.so 00:07:25.278 SO libspdk_accel_dsa.so.5.0 00:07:25.278 SYMLINK libspdk_accel_iaa.so 00:07:25.278 SYMLINK libspdk_scheduler_dynamic.so 00:07:25.539 LIB libspdk_vfu_device.a 00:07:25.539 SYMLINK libspdk_blob_bdev.so 00:07:25.539 SYMLINK libspdk_accel_dsa.so 00:07:25.539 SO libspdk_vfu_device.so.3.0 00:07:25.539 LIB libspdk_sock_posix.a 00:07:25.539 SYMLINK libspdk_vfu_device.so 00:07:25.539 SO libspdk_sock_posix.so.6.0 00:07:25.798 LIB libspdk_fsdev_aio.a 00:07:25.798 SYMLINK libspdk_sock_posix.so 00:07:25.798 SO libspdk_fsdev_aio.so.1.0 00:07:25.798 SYMLINK libspdk_fsdev_aio.so 00:07:26.058 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:26.058 CC 
module/bdev/gpt/vbdev_gpt.o 00:07:26.058 CC module/bdev/gpt/gpt.o 00:07:26.058 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:26.058 CC module/bdev/null/bdev_null.o 00:07:26.058 CC module/bdev/null/bdev_null_rpc.o 00:07:26.058 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:26.058 CC module/bdev/delay/vbdev_delay.o 00:07:26.058 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:26.058 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:26.058 CC module/blobfs/bdev/blobfs_bdev.o 00:07:26.058 CC module/bdev/nvme/bdev_nvme.o 00:07:26.058 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:26.058 CC module/bdev/error/vbdev_error.o 00:07:26.058 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:26.058 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:26.058 CC module/bdev/nvme/nvme_rpc.o 00:07:26.058 CC module/bdev/error/vbdev_error_rpc.o 00:07:26.058 CC module/bdev/nvme/bdev_mdns_client.o 00:07:26.058 CC module/bdev/nvme/vbdev_opal.o 00:07:26.058 CC module/bdev/passthru/vbdev_passthru.o 00:07:26.058 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:26.058 CC module/bdev/raid/bdev_raid_sb.o 00:07:26.058 CC module/bdev/raid/bdev_raid.o 00:07:26.058 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:26.058 CC module/bdev/raid/bdev_raid_rpc.o 00:07:26.058 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:26.058 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:26.058 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:26.058 CC module/bdev/lvol/vbdev_lvol.o 00:07:26.058 CC module/bdev/iscsi/bdev_iscsi.o 00:07:26.058 CC module/bdev/raid/raid0.o 00:07:26.058 CC module/bdev/raid/raid1.o 00:07:26.058 CC module/bdev/split/vbdev_split.o 00:07:26.058 CC module/bdev/split/vbdev_split_rpc.o 00:07:26.058 CC module/bdev/raid/concat.o 00:07:26.058 CC module/bdev/aio/bdev_aio.o 00:07:26.058 CC module/bdev/malloc/bdev_malloc.o 00:07:26.058 CC module/bdev/aio/bdev_aio_rpc.o 00:07:26.058 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:26.058 CC module/bdev/ftl/bdev_ftl.o 00:07:26.058 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:26.318 LIB 
libspdk_blobfs_bdev.a 00:07:26.318 LIB libspdk_bdev_gpt.a 00:07:26.318 LIB libspdk_bdev_split.a 00:07:26.318 SO libspdk_blobfs_bdev.so.6.0 00:07:26.318 LIB libspdk_bdev_null.a 00:07:26.318 SO libspdk_bdev_gpt.so.6.0 00:07:26.318 SO libspdk_bdev_null.so.6.0 00:07:26.318 LIB libspdk_bdev_error.a 00:07:26.318 SO libspdk_bdev_split.so.6.0 00:07:26.318 SO libspdk_bdev_error.so.6.0 00:07:26.318 LIB libspdk_bdev_ftl.a 00:07:26.318 LIB libspdk_bdev_zone_block.a 00:07:26.318 LIB libspdk_bdev_passthru.a 00:07:26.318 SYMLINK libspdk_bdev_gpt.so 00:07:26.318 SYMLINK libspdk_blobfs_bdev.so 00:07:26.318 SO libspdk_bdev_ftl.so.6.0 00:07:26.318 SYMLINK libspdk_bdev_null.so 00:07:26.318 SYMLINK libspdk_bdev_split.so 00:07:26.318 SO libspdk_bdev_zone_block.so.6.0 00:07:26.318 SO libspdk_bdev_passthru.so.6.0 00:07:26.318 LIB libspdk_bdev_iscsi.a 00:07:26.318 LIB libspdk_bdev_aio.a 00:07:26.318 SYMLINK libspdk_bdev_error.so 00:07:26.318 LIB libspdk_bdev_delay.a 00:07:26.318 LIB libspdk_bdev_malloc.a 00:07:26.318 SO libspdk_bdev_iscsi.so.6.0 00:07:26.318 SO libspdk_bdev_aio.so.6.0 00:07:26.318 SYMLINK libspdk_bdev_ftl.so 00:07:26.318 SO libspdk_bdev_malloc.so.6.0 00:07:26.318 SO libspdk_bdev_delay.so.6.0 00:07:26.318 SYMLINK libspdk_bdev_zone_block.so 00:07:26.318 SYMLINK libspdk_bdev_passthru.so 00:07:26.579 SYMLINK libspdk_bdev_aio.so 00:07:26.579 SYMLINK libspdk_bdev_iscsi.so 00:07:26.579 SYMLINK libspdk_bdev_malloc.so 00:07:26.579 LIB libspdk_bdev_virtio.a 00:07:26.579 LIB libspdk_bdev_lvol.a 00:07:26.579 SYMLINK libspdk_bdev_delay.so 00:07:26.579 SO libspdk_bdev_virtio.so.6.0 00:07:26.579 SO libspdk_bdev_lvol.so.6.0 00:07:26.579 SYMLINK libspdk_bdev_virtio.so 00:07:26.579 SYMLINK libspdk_bdev_lvol.so 00:07:26.841 LIB libspdk_bdev_raid.a 00:07:26.841 SO libspdk_bdev_raid.so.6.0 00:07:27.101 SYMLINK libspdk_bdev_raid.so 00:07:28.042 LIB libspdk_bdev_nvme.a 00:07:28.302 SO libspdk_bdev_nvme.so.7.1 00:07:28.302 SYMLINK libspdk_bdev_nvme.so 00:07:29.243 CC 
module/event/subsystems/sock/sock.o 00:07:29.243 CC module/event/subsystems/iobuf/iobuf.o 00:07:29.243 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:29.243 CC module/event/subsystems/vmd/vmd.o 00:07:29.243 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:29.243 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:29.243 CC module/event/subsystems/keyring/keyring.o 00:07:29.243 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:29.243 CC module/event/subsystems/fsdev/fsdev.o 00:07:29.243 CC module/event/subsystems/scheduler/scheduler.o 00:07:29.243 LIB libspdk_event_sock.a 00:07:29.243 LIB libspdk_event_vmd.a 00:07:29.243 SO libspdk_event_sock.so.5.0 00:07:29.243 LIB libspdk_event_iobuf.a 00:07:29.243 LIB libspdk_event_vhost_blk.a 00:07:29.243 LIB libspdk_event_keyring.a 00:07:29.243 LIB libspdk_event_vfu_tgt.a 00:07:29.243 LIB libspdk_event_fsdev.a 00:07:29.243 SO libspdk_event_vmd.so.6.0 00:07:29.243 LIB libspdk_event_scheduler.a 00:07:29.243 SO libspdk_event_vhost_blk.so.3.0 00:07:29.243 SO libspdk_event_iobuf.so.3.0 00:07:29.243 SO libspdk_event_keyring.so.1.0 00:07:29.243 SO libspdk_event_fsdev.so.1.0 00:07:29.243 SO libspdk_event_vfu_tgt.so.3.0 00:07:29.243 SYMLINK libspdk_event_sock.so 00:07:29.243 SO libspdk_event_scheduler.so.4.0 00:07:29.243 SYMLINK libspdk_event_vhost_blk.so 00:07:29.243 SYMLINK libspdk_event_vmd.so 00:07:29.243 SYMLINK libspdk_event_iobuf.so 00:07:29.243 SYMLINK libspdk_event_fsdev.so 00:07:29.243 SYMLINK libspdk_event_keyring.so 00:07:29.243 SYMLINK libspdk_event_vfu_tgt.so 00:07:29.243 SYMLINK libspdk_event_scheduler.so 00:07:29.813 CC module/event/subsystems/accel/accel.o 00:07:29.813 LIB libspdk_event_accel.a 00:07:29.813 SO libspdk_event_accel.so.6.0 00:07:29.813 SYMLINK libspdk_event_accel.so 00:07:30.383 CC module/event/subsystems/bdev/bdev.o 00:07:30.383 LIB libspdk_event_bdev.a 00:07:30.383 SO libspdk_event_bdev.so.6.0 00:07:30.645 SYMLINK libspdk_event_bdev.so 00:07:30.956 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:07:30.956 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:30.956 CC module/event/subsystems/ublk/ublk.o 00:07:30.956 CC module/event/subsystems/scsi/scsi.o 00:07:30.956 CC module/event/subsystems/nbd/nbd.o 00:07:31.218 LIB libspdk_event_ublk.a 00:07:31.218 LIB libspdk_event_nbd.a 00:07:31.218 LIB libspdk_event_scsi.a 00:07:31.218 SO libspdk_event_ublk.so.3.0 00:07:31.218 SO libspdk_event_nbd.so.6.0 00:07:31.218 SO libspdk_event_scsi.so.6.0 00:07:31.218 LIB libspdk_event_nvmf.a 00:07:31.218 SYMLINK libspdk_event_ublk.so 00:07:31.218 SYMLINK libspdk_event_nbd.so 00:07:31.218 SYMLINK libspdk_event_scsi.so 00:07:31.218 SO libspdk_event_nvmf.so.6.0 00:07:31.218 SYMLINK libspdk_event_nvmf.so 00:07:31.480 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:31.480 CC module/event/subsystems/iscsi/iscsi.o 00:07:31.740 LIB libspdk_event_vhost_scsi.a 00:07:31.740 LIB libspdk_event_iscsi.a 00:07:31.740 SO libspdk_event_vhost_scsi.so.3.0 00:07:31.740 SO libspdk_event_iscsi.so.6.0 00:07:31.740 SYMLINK libspdk_event_vhost_scsi.so 00:07:32.001 SYMLINK libspdk_event_iscsi.so 00:07:32.001 SO libspdk.so.6.0 00:07:32.001 SYMLINK libspdk.so 00:07:32.573 CC test/rpc_client/rpc_client_test.o 00:07:32.573 CC app/trace_record/trace_record.o 00:07:32.573 TEST_HEADER include/spdk/accel.h 00:07:32.573 TEST_HEADER include/spdk/accel_module.h 00:07:32.573 TEST_HEADER include/spdk/assert.h 00:07:32.573 TEST_HEADER include/spdk/base64.h 00:07:32.573 TEST_HEADER include/spdk/barrier.h 00:07:32.574 CXX app/trace/trace.o 00:07:32.574 TEST_HEADER include/spdk/bdev.h 00:07:32.574 TEST_HEADER include/spdk/bdev_module.h 00:07:32.574 CC app/spdk_lspci/spdk_lspci.o 00:07:32.574 TEST_HEADER include/spdk/bdev_zone.h 00:07:32.574 TEST_HEADER include/spdk/bit_array.h 00:07:32.574 CC app/spdk_nvme_identify/identify.o 00:07:32.574 CC app/spdk_nvme_discover/discovery_aer.o 00:07:32.574 CC app/spdk_top/spdk_top.o 00:07:32.574 TEST_HEADER include/spdk/bit_pool.h 00:07:32.574 TEST_HEADER 
include/spdk/blob_bdev.h 00:07:32.574 CC app/spdk_nvme_perf/perf.o 00:07:32.574 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:32.574 TEST_HEADER include/spdk/blobfs.h 00:07:32.574 TEST_HEADER include/spdk/blob.h 00:07:32.574 TEST_HEADER include/spdk/conf.h 00:07:32.574 TEST_HEADER include/spdk/config.h 00:07:32.574 TEST_HEADER include/spdk/crc16.h 00:07:32.574 TEST_HEADER include/spdk/cpuset.h 00:07:32.574 TEST_HEADER include/spdk/crc32.h 00:07:32.574 TEST_HEADER include/spdk/crc64.h 00:07:32.574 TEST_HEADER include/spdk/dif.h 00:07:32.574 TEST_HEADER include/spdk/dma.h 00:07:32.574 TEST_HEADER include/spdk/env_dpdk.h 00:07:32.574 TEST_HEADER include/spdk/endian.h 00:07:32.574 TEST_HEADER include/spdk/env.h 00:07:32.574 TEST_HEADER include/spdk/event.h 00:07:32.574 TEST_HEADER include/spdk/fd_group.h 00:07:32.574 TEST_HEADER include/spdk/fd.h 00:07:32.574 TEST_HEADER include/spdk/file.h 00:07:32.574 TEST_HEADER include/spdk/fsdev.h 00:07:32.574 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:32.574 TEST_HEADER include/spdk/fsdev_module.h 00:07:32.574 TEST_HEADER include/spdk/ftl.h 00:07:32.574 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:32.574 CC app/spdk_dd/spdk_dd.o 00:07:32.574 TEST_HEADER include/spdk/gpt_spec.h 00:07:32.574 CC app/iscsi_tgt/iscsi_tgt.o 00:07:32.574 TEST_HEADER include/spdk/hexlify.h 00:07:32.574 TEST_HEADER include/spdk/idxd.h 00:07:32.574 TEST_HEADER include/spdk/histogram_data.h 00:07:32.574 CC app/nvmf_tgt/nvmf_main.o 00:07:32.574 TEST_HEADER include/spdk/idxd_spec.h 00:07:32.574 TEST_HEADER include/spdk/init.h 00:07:32.574 TEST_HEADER include/spdk/ioat.h 00:07:32.574 TEST_HEADER include/spdk/ioat_spec.h 00:07:32.574 TEST_HEADER include/spdk/json.h 00:07:32.574 TEST_HEADER include/spdk/iscsi_spec.h 00:07:32.574 TEST_HEADER include/spdk/keyring.h 00:07:32.574 TEST_HEADER include/spdk/jsonrpc.h 00:07:32.574 TEST_HEADER include/spdk/keyring_module.h 00:07:32.574 TEST_HEADER include/spdk/likely.h 00:07:32.574 TEST_HEADER include/spdk/log.h 
00:07:32.574 TEST_HEADER include/spdk/lvol.h 00:07:32.574 TEST_HEADER include/spdk/md5.h 00:07:32.574 TEST_HEADER include/spdk/memory.h 00:07:32.574 TEST_HEADER include/spdk/mmio.h 00:07:32.574 TEST_HEADER include/spdk/nbd.h 00:07:32.574 TEST_HEADER include/spdk/notify.h 00:07:32.574 TEST_HEADER include/spdk/net.h 00:07:32.574 TEST_HEADER include/spdk/nvme_intel.h 00:07:32.574 TEST_HEADER include/spdk/nvme.h 00:07:32.574 TEST_HEADER include/spdk/nvme_spec.h 00:07:32.574 CC app/spdk_tgt/spdk_tgt.o 00:07:32.574 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:32.574 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:32.574 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:32.574 TEST_HEADER include/spdk/nvme_zns.h 00:07:32.574 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:32.574 TEST_HEADER include/spdk/nvmf.h 00:07:32.574 TEST_HEADER include/spdk/nvmf_spec.h 00:07:32.574 TEST_HEADER include/spdk/nvmf_transport.h 00:07:32.574 TEST_HEADER include/spdk/opal.h 00:07:32.574 TEST_HEADER include/spdk/pci_ids.h 00:07:32.574 TEST_HEADER include/spdk/opal_spec.h 00:07:32.574 TEST_HEADER include/spdk/pipe.h 00:07:32.574 TEST_HEADER include/spdk/reduce.h 00:07:32.574 TEST_HEADER include/spdk/queue.h 00:07:32.574 TEST_HEADER include/spdk/scsi.h 00:07:32.574 TEST_HEADER include/spdk/scheduler.h 00:07:32.574 TEST_HEADER include/spdk/rpc.h 00:07:32.574 TEST_HEADER include/spdk/scsi_spec.h 00:07:32.574 TEST_HEADER include/spdk/stdinc.h 00:07:32.574 TEST_HEADER include/spdk/string.h 00:07:32.574 TEST_HEADER include/spdk/sock.h 00:07:32.574 TEST_HEADER include/spdk/thread.h 00:07:32.574 TEST_HEADER include/spdk/trace.h 00:07:32.574 TEST_HEADER include/spdk/ublk.h 00:07:32.574 TEST_HEADER include/spdk/trace_parser.h 00:07:32.574 TEST_HEADER include/spdk/tree.h 00:07:32.574 TEST_HEADER include/spdk/uuid.h 00:07:32.574 TEST_HEADER include/spdk/util.h 00:07:32.574 TEST_HEADER include/spdk/version.h 00:07:32.574 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:32.574 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:07:32.574 TEST_HEADER include/spdk/vhost.h 00:07:32.574 TEST_HEADER include/spdk/vmd.h 00:07:32.574 TEST_HEADER include/spdk/xor.h 00:07:32.574 TEST_HEADER include/spdk/zipf.h 00:07:32.574 CXX test/cpp_headers/accel.o 00:07:32.574 CXX test/cpp_headers/accel_module.o 00:07:32.574 CXX test/cpp_headers/assert.o 00:07:32.574 CXX test/cpp_headers/base64.o 00:07:32.574 CXX test/cpp_headers/barrier.o 00:07:32.574 CXX test/cpp_headers/bdev.o 00:07:32.574 CXX test/cpp_headers/bdev_module.o 00:07:32.574 CXX test/cpp_headers/bit_pool.o 00:07:32.574 CXX test/cpp_headers/bdev_zone.o 00:07:32.574 CXX test/cpp_headers/bit_array.o 00:07:32.574 CXX test/cpp_headers/blob_bdev.o 00:07:32.574 CXX test/cpp_headers/blobfs_bdev.o 00:07:32.574 CXX test/cpp_headers/blobfs.o 00:07:32.574 CXX test/cpp_headers/config.o 00:07:32.574 CXX test/cpp_headers/blob.o 00:07:32.574 CXX test/cpp_headers/conf.o 00:07:32.574 CXX test/cpp_headers/cpuset.o 00:07:32.574 CXX test/cpp_headers/crc16.o 00:07:32.574 CXX test/cpp_headers/crc32.o 00:07:32.574 CXX test/cpp_headers/crc64.o 00:07:32.574 CXX test/cpp_headers/dif.o 00:07:32.574 CXX test/cpp_headers/dma.o 00:07:32.574 CXX test/cpp_headers/endian.o 00:07:32.574 CXX test/cpp_headers/env_dpdk.o 00:07:32.574 CXX test/cpp_headers/env.o 00:07:32.574 CXX test/cpp_headers/event.o 00:07:32.574 CXX test/cpp_headers/fd.o 00:07:32.574 CXX test/cpp_headers/fd_group.o 00:07:32.574 CXX test/cpp_headers/file.o 00:07:32.574 CXX test/cpp_headers/fsdev.o 00:07:32.574 CXX test/cpp_headers/fsdev_module.o 00:07:32.574 CXX test/cpp_headers/ftl.o 00:07:32.574 CXX test/cpp_headers/gpt_spec.o 00:07:32.574 CXX test/cpp_headers/fuse_dispatcher.o 00:07:32.574 CXX test/cpp_headers/hexlify.o 00:07:32.574 CXX test/cpp_headers/idxd_spec.o 00:07:32.574 CXX test/cpp_headers/idxd.o 00:07:32.574 CXX test/cpp_headers/histogram_data.o 00:07:32.574 CXX test/cpp_headers/init.o 00:07:32.574 CXX test/cpp_headers/ioat_spec.o 00:07:32.574 CXX test/cpp_headers/ioat.o 
00:07:32.574 CXX test/cpp_headers/json.o 00:07:32.574 CXX test/cpp_headers/jsonrpc.o 00:07:32.574 CXX test/cpp_headers/iscsi_spec.o 00:07:32.574 CXX test/cpp_headers/log.o 00:07:32.574 CXX test/cpp_headers/keyring.o 00:07:32.574 CXX test/cpp_headers/keyring_module.o 00:07:32.574 CXX test/cpp_headers/md5.o 00:07:32.574 CXX test/cpp_headers/likely.o 00:07:32.574 CXX test/cpp_headers/lvol.o 00:07:32.574 CXX test/cpp_headers/mmio.o 00:07:32.574 CXX test/cpp_headers/nbd.o 00:07:32.574 CXX test/cpp_headers/memory.o 00:07:32.574 CXX test/cpp_headers/net.o 00:07:32.574 CXX test/cpp_headers/nvme.o 00:07:32.574 CXX test/cpp_headers/notify.o 00:07:32.574 CXX test/cpp_headers/nvme_ocssd.o 00:07:32.574 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:32.574 CXX test/cpp_headers/nvme_zns.o 00:07:32.574 CXX test/cpp_headers/nvme_spec.o 00:07:32.574 CXX test/cpp_headers/nvme_intel.o 00:07:32.574 CXX test/cpp_headers/nvmf_cmd.o 00:07:32.836 CXX test/cpp_headers/nvmf_transport.o 00:07:32.836 CXX test/cpp_headers/nvmf.o 00:07:32.836 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:32.836 CXX test/cpp_headers/opal.o 00:07:32.836 CXX test/cpp_headers/pci_ids.o 00:07:32.836 CXX test/cpp_headers/pipe.o 00:07:32.836 CXX test/cpp_headers/opal_spec.o 00:07:32.836 CXX test/cpp_headers/nvmf_spec.o 00:07:32.836 CXX test/cpp_headers/queue.o 00:07:32.836 CC test/env/vtophys/vtophys.o 00:07:32.836 CXX test/cpp_headers/reduce.o 00:07:32.836 CC test/app/jsoncat/jsoncat.o 00:07:32.836 CXX test/cpp_headers/scsi.o 00:07:32.836 CXX test/cpp_headers/rpc.o 00:07:32.836 CXX test/cpp_headers/scsi_spec.o 00:07:32.836 CXX test/cpp_headers/scheduler.o 00:07:32.836 CXX test/cpp_headers/sock.o 00:07:32.836 CXX test/cpp_headers/string.o 00:07:32.836 CXX test/cpp_headers/stdinc.o 00:07:32.836 CC examples/util/zipf/zipf.o 00:07:32.836 CC test/app/histogram_perf/histogram_perf.o 00:07:32.836 CC test/thread/poller_perf/poller_perf.o 00:07:32.836 CXX test/cpp_headers/trace_parser.o 00:07:32.836 CXX test/cpp_headers/thread.o 
00:07:32.836 CXX test/cpp_headers/trace.o 00:07:32.836 CXX test/cpp_headers/tree.o 00:07:32.836 CXX test/cpp_headers/ublk.o 00:07:32.836 CXX test/cpp_headers/version.o 00:07:32.836 CXX test/cpp_headers/util.o 00:07:32.836 CXX test/cpp_headers/uuid.o 00:07:32.836 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:32.836 CXX test/cpp_headers/vfio_user_spec.o 00:07:32.836 CXX test/cpp_headers/vhost.o 00:07:32.836 CC test/app/stub/stub.o 00:07:32.836 CXX test/cpp_headers/vfio_user_pci.o 00:07:32.836 LINK spdk_lspci 00:07:32.836 CC examples/ioat/perf/perf.o 00:07:32.836 CXX test/cpp_headers/xor.o 00:07:32.836 CC examples/ioat/verify/verify.o 00:07:32.836 CXX test/cpp_headers/zipf.o 00:07:32.836 CXX test/cpp_headers/vmd.o 00:07:32.836 CC test/app/bdev_svc/bdev_svc.o 00:07:32.836 CC test/env/memory/memory_ut.o 00:07:32.836 CC test/env/pci/pci_ut.o 00:07:32.836 CC app/fio/nvme/fio_plugin.o 00:07:32.836 LINK rpc_client_test 00:07:32.836 CC test/dma/test_dma/test_dma.o 00:07:32.836 CC app/fio/bdev/fio_plugin.o 00:07:32.836 LINK interrupt_tgt 00:07:32.836 LINK nvmf_tgt 00:07:32.836 LINK spdk_nvme_discover 00:07:32.836 LINK spdk_trace_record 00:07:33.096 LINK iscsi_tgt 00:07:33.096 LINK histogram_perf 00:07:33.096 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:33.096 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:33.096 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:33.096 LINK spdk_tgt 00:07:33.096 CC test/env/mem_callbacks/mem_callbacks.o 00:07:33.096 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:33.096 LINK stub 00:07:33.096 LINK jsoncat 00:07:33.096 LINK bdev_svc 00:07:33.355 LINK poller_perf 00:07:33.355 LINK spdk_dd 00:07:33.355 LINK vtophys 00:07:33.355 LINK zipf 00:07:33.355 LINK env_dpdk_post_init 00:07:33.355 LINK verify 00:07:33.355 LINK spdk_trace 00:07:33.616 LINK ioat_perf 00:07:33.616 LINK pci_ut 00:07:33.616 CC test/event/reactor/reactor.o 00:07:33.616 LINK spdk_nvme 00:07:33.616 CC test/event/reactor_perf/reactor_perf.o 00:07:33.616 LINK nvme_fuzz 
00:07:33.616 CC test/event/event_perf/event_perf.o 00:07:33.616 LINK vhost_fuzz 00:07:33.616 CC test/event/app_repeat/app_repeat.o 00:07:33.616 LINK spdk_nvme_identify 00:07:33.877 CC test/event/scheduler/scheduler.o 00:07:33.877 LINK test_dma 00:07:33.877 LINK spdk_bdev 00:07:33.877 LINK spdk_nvme_perf 00:07:33.877 CC examples/vmd/lsvmd/lsvmd.o 00:07:33.877 CC examples/sock/hello_world/hello_sock.o 00:07:33.877 CC examples/vmd/led/led.o 00:07:33.877 CC examples/idxd/perf/perf.o 00:07:33.877 CC examples/thread/thread/thread_ex.o 00:07:33.877 CC app/vhost/vhost.o 00:07:33.877 LINK spdk_top 00:07:33.877 LINK reactor 00:07:33.877 LINK reactor_perf 00:07:33.877 LINK mem_callbacks 00:07:33.877 LINK app_repeat 00:07:33.877 LINK event_perf 00:07:33.877 LINK lsvmd 00:07:33.877 LINK led 00:07:34.138 LINK scheduler 00:07:34.138 LINK hello_sock 00:07:34.138 LINK vhost 00:07:34.138 LINK thread 00:07:34.138 LINK idxd_perf 00:07:34.397 CC test/nvme/aer/aer.o 00:07:34.397 LINK memory_ut 00:07:34.397 CC test/nvme/cuse/cuse.o 00:07:34.397 CC test/nvme/connect_stress/connect_stress.o 00:07:34.397 CC test/nvme/err_injection/err_injection.o 00:07:34.397 CC test/nvme/reset/reset.o 00:07:34.397 CC test/nvme/sgl/sgl.o 00:07:34.397 CC test/accel/dif/dif.o 00:07:34.397 CC test/nvme/overhead/overhead.o 00:07:34.397 CC test/nvme/startup/startup.o 00:07:34.397 CC test/nvme/fused_ordering/fused_ordering.o 00:07:34.397 CC test/nvme/reserve/reserve.o 00:07:34.397 CC test/nvme/simple_copy/simple_copy.o 00:07:34.397 CC test/nvme/boot_partition/boot_partition.o 00:07:34.397 CC test/nvme/e2edp/nvme_dp.o 00:07:34.397 CC test/nvme/fdp/fdp.o 00:07:34.397 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:34.397 CC test/nvme/compliance/nvme_compliance.o 00:07:34.397 CC test/blobfs/mkfs/mkfs.o 00:07:34.666 CC test/lvol/esnap/esnap.o 00:07:34.666 LINK boot_partition 00:07:34.666 CC examples/nvme/arbitration/arbitration.o 00:07:34.666 LINK startup 00:07:34.666 LINK connect_stress 00:07:34.666 CC 
examples/nvme/hello_world/hello_world.o 00:07:34.666 LINK err_injection 00:07:34.666 LINK fused_ordering 00:07:34.666 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:34.666 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:34.666 CC examples/nvme/hotplug/hotplug.o 00:07:34.666 LINK reserve 00:07:34.666 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:34.666 CC examples/nvme/abort/abort.o 00:07:34.666 LINK doorbell_aers 00:07:34.666 CC examples/nvme/reconnect/reconnect.o 00:07:34.666 LINK aer 00:07:34.666 LINK simple_copy 00:07:34.666 LINK mkfs 00:07:34.666 LINK reset 00:07:34.666 LINK sgl 00:07:34.666 LINK nvme_dp 00:07:34.666 LINK overhead 00:07:34.666 CC examples/accel/perf/accel_perf.o 00:07:34.666 LINK nvme_compliance 00:07:34.666 LINK fdp 00:07:34.666 CC examples/blob/cli/blobcli.o 00:07:34.666 CC examples/blob/hello_world/hello_blob.o 00:07:34.666 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:34.666 LINK iscsi_fuzz 00:07:34.666 LINK pmr_persistence 00:07:34.666 LINK cmb_copy 00:07:34.927 LINK hello_world 00:07:34.927 LINK hotplug 00:07:34.927 LINK abort 00:07:34.927 LINK arbitration 00:07:34.927 LINK reconnect 00:07:34.927 LINK dif 00:07:34.927 LINK hello_blob 00:07:34.927 LINK nvme_manage 00:07:34.927 LINK hello_fsdev 00:07:35.189 LINK accel_perf 00:07:35.189 LINK blobcli 00:07:35.449 LINK cuse 00:07:35.449 CC test/bdev/bdevio/bdevio.o 00:07:35.710 CC examples/bdev/bdevperf/bdevperf.o 00:07:35.710 CC examples/bdev/hello_world/hello_bdev.o 00:07:35.971 LINK bdevio 00:07:35.971 LINK hello_bdev 00:07:36.232 LINK bdevperf 00:07:37.174 CC examples/nvmf/nvmf/nvmf.o 00:07:37.435 LINK nvmf 00:07:38.817 LINK esnap 00:07:39.387 00:07:39.387 real 0m55.545s 00:07:39.387 user 7m50.383s 00:07:39.387 sys 4m26.636s 00:07:39.387 08:04:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:39.387 08:04:43 make -- common/autotest_common.sh@10 -- $ set +x 00:07:39.387 ************************************ 00:07:39.387 END TEST make 00:07:39.387 
************************************ 00:07:39.387 08:04:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:39.388 08:04:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:39.388 08:04:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:39.388 08:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.388 08:04:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:39.388 08:04:43 -- pm/common@44 -- $ pid=1661713 00:07:39.388 08:04:43 -- pm/common@50 -- $ kill -TERM 1661713 00:07:39.388 08:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.388 08:04:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:39.388 08:04:43 -- pm/common@44 -- $ pid=1661715 00:07:39.388 08:04:43 -- pm/common@50 -- $ kill -TERM 1661715 00:07:39.388 08:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.388 08:04:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:39.388 08:04:43 -- pm/common@44 -- $ pid=1661716 00:07:39.388 08:04:43 -- pm/common@50 -- $ kill -TERM 1661716 00:07:39.388 08:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.388 08:04:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:39.388 08:04:43 -- pm/common@44 -- $ pid=1661739 00:07:39.388 08:04:43 -- pm/common@50 -- $ sudo -E kill -TERM 1661739 00:07:39.388 08:04:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:39.388 08:04:43 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:39.388 08:04:44 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 
00:07:39.388 08:04:44 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.388 08:04:44 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.388 08:04:44 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.388 08:04:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.388 08:04:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.388 08:04:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.388 08:04:44 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.388 08:04:44 -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.388 08:04:44 -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.388 08:04:44 -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.388 08:04:44 -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.388 08:04:44 -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.388 08:04:44 -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.388 08:04:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.388 08:04:44 -- scripts/common.sh@344 -- # case "$op" in 00:07:39.388 08:04:44 -- scripts/common.sh@345 -- # : 1 00:07:39.388 08:04:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.388 08:04:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.388 08:04:44 -- scripts/common.sh@365 -- # decimal 1 00:07:39.388 08:04:44 -- scripts/common.sh@353 -- # local d=1 00:07:39.388 08:04:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.388 08:04:44 -- scripts/common.sh@355 -- # echo 1 00:07:39.388 08:04:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.388 08:04:44 -- scripts/common.sh@366 -- # decimal 2 00:07:39.388 08:04:44 -- scripts/common.sh@353 -- # local d=2 00:07:39.388 08:04:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.388 08:04:44 -- scripts/common.sh@355 -- # echo 2 00:07:39.649 08:04:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.649 08:04:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.649 08:04:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.649 08:04:44 -- scripts/common.sh@368 -- # return 0 00:07:39.649 08:04:44 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.649 08:04:44 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.649 --rc genhtml_branch_coverage=1 00:07:39.649 --rc genhtml_function_coverage=1 00:07:39.649 --rc genhtml_legend=1 00:07:39.649 --rc geninfo_all_blocks=1 00:07:39.649 --rc geninfo_unexecuted_blocks=1 00:07:39.649 00:07:39.649 ' 00:07:39.649 08:04:44 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.649 --rc genhtml_branch_coverage=1 00:07:39.649 --rc genhtml_function_coverage=1 00:07:39.649 --rc genhtml_legend=1 00:07:39.649 --rc geninfo_all_blocks=1 00:07:39.649 --rc geninfo_unexecuted_blocks=1 00:07:39.649 00:07:39.649 ' 00:07:39.649 08:04:44 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.649 --rc genhtml_branch_coverage=1 00:07:39.649 --rc 
genhtml_function_coverage=1 00:07:39.649 --rc genhtml_legend=1 00:07:39.649 --rc geninfo_all_blocks=1 00:07:39.649 --rc geninfo_unexecuted_blocks=1 00:07:39.649 00:07:39.649 ' 00:07:39.649 08:04:44 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.649 --rc genhtml_branch_coverage=1 00:07:39.649 --rc genhtml_function_coverage=1 00:07:39.649 --rc genhtml_legend=1 00:07:39.649 --rc geninfo_all_blocks=1 00:07:39.649 --rc geninfo_unexecuted_blocks=1 00:07:39.649 00:07:39.649 ' 00:07:39.649 08:04:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.649 08:04:44 -- nvmf/common.sh@7 -- # uname -s 00:07:39.649 08:04:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.649 08:04:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.649 08:04:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.649 08:04:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.649 08:04:44 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.649 08:04:44 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:39.650 08:04:44 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.650 08:04:44 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:39.650 08:04:44 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:39.650 08:04:44 -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:39.650 08:04:44 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.650 08:04:44 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:39.650 08:04:44 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:39.650 08:04:44 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.650 08:04:44 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:07:39.650 08:04:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.650 08:04:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.650 08:04:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.650 08:04:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.650 08:04:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.650 08:04:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.650 08:04:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.650 08:04:44 -- paths/export.sh@5 -- # export PATH 00:07:39.650 08:04:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.650 08:04:44 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:39.650 08:04:44 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:39.650 08:04:44 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:39.650 08:04:44 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:07:39.650 08:04:44 -- nvmf/common.sh@50 -- # : 0 00:07:39.650 08:04:44 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:39.650 08:04:44 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:39.650 08:04:44 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:39.650 08:04:44 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.650 08:04:44 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.650 08:04:44 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:39.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:39.650 08:04:44 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:39.650 08:04:44 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:39.650 08:04:44 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:39.650 08:04:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:39.650 08:04:44 -- spdk/autotest.sh@32 -- # uname -s 00:07:39.650 08:04:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:39.650 08:04:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:39.650 08:04:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:39.650 08:04:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:39.650 08:04:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:39.650 08:04:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:39.650 08:04:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:39.650 08:04:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:39.650 08:04:44 -- spdk/autotest.sh@48 -- # udevadm_pid=1727263 00:07:39.650 08:04:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:39.650 08:04:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:39.650 
08:04:44 -- pm/common@17 -- # local monitor 00:07:39.650 08:04:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.650 08:04:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.650 08:04:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.650 08:04:44 -- pm/common@21 -- # date +%s 00:07:39.650 08:04:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.650 08:04:44 -- pm/common@21 -- # date +%s 00:07:39.650 08:04:44 -- pm/common@25 -- # sleep 1 00:07:39.650 08:04:44 -- pm/common@21 -- # date +%s 00:07:39.650 08:04:44 -- pm/common@21 -- # date +%s 00:07:39.650 08:04:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086284 00:07:39.650 08:04:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086284 00:07:39.650 08:04:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086284 00:07:39.650 08:04:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086284 00:07:39.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086284_collect-vmstat.pm.log 00:07:39.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086284_collect-cpu-load.pm.log 00:07:39.650 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086284_collect-cpu-temp.pm.log 00:07:39.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086284_collect-bmc-pm.bmc.pm.log 00:07:40.592 08:04:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:40.592 08:04:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:40.592 08:04:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.592 08:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.592 08:04:45 -- spdk/autotest.sh@59 -- # create_test_list 00:07:40.592 08:04:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:40.592 08:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.592 08:04:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:40.592 08:04:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.592 08:04:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.592 08:04:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:40.592 08:04:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.592 08:04:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:40.592 08:04:45 -- common/autotest_common.sh@1457 -- # uname 00:07:40.592 08:04:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:40.592 08:04:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:40.592 08:04:45 -- common/autotest_common.sh@1477 -- # uname 00:07:40.592 08:04:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:40.592 08:04:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:40.592 08:04:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:40.853 lcov: LCOV version 1.15 00:07:40.853 08:04:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:55.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:55.902 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:10.816 08:05:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:10.816 08:05:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.816 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.816 08:05:15 -- spdk/autotest.sh@78 -- # rm -f 00:08:10.816 08:05:15 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:15.025 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:65:00.0 (144d a80a): Already using the nvme driver 00:08:15.025 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.7 
(8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:08:15.025 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:08:15.025 08:05:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:15.025 08:05:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:15.025 08:05:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:15.025 08:05:19 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:15.025 08:05:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:15.026 08:05:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:15.026 08:05:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:15.026 08:05:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:15.026 08:05:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:15.026 08:05:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:15.026 08:05:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:15.026 08:05:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:15.026 08:05:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:15.026 08:05:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:15.026 08:05:19 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:15.287 No valid GPT data, bailing 00:08:15.287 08:05:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:15.287 08:05:19 -- scripts/common.sh@394 -- # pt= 00:08:15.287 08:05:19 -- scripts/common.sh@395 -- # return 1 00:08:15.287 08:05:19 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:15.287 1+0 records in 00:08:15.287 1+0 records out 00:08:15.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529738 s, 198 MB/s 00:08:15.287 08:05:19 -- spdk/autotest.sh@105 -- # sync 00:08:15.287 08:05:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:15.287 08:05:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:15.287 08:05:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:25.289 08:05:28 -- spdk/autotest.sh@111 -- # uname -s 00:08:25.289 08:05:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:25.289 08:05:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:25.289 08:05:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:27.827 Hugepages 00:08:27.827 node hugesize free / total 00:08:27.827 node0 1048576kB 0 / 0 00:08:27.827 node0 2048kB 0 / 0 00:08:27.827 node1 1048576kB 0 / 0 00:08:27.827 node1 2048kB 0 / 0 00:08:27.827 00:08:27.827 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:27.827 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:08:27.827 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:08:27.827 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:08:27.827 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 
0000:80:01.6 8086 0b00 1 ioatdma - - 00:08:27.827 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:08:27.827 08:05:32 -- spdk/autotest.sh@117 -- # uname -s 00:08:27.827 08:05:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:27.827 08:05:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:27.827 08:05:32 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:31.123 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:31.123 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:31.123 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:31.123 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:31.123 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:31.123 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:31.383 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:33.293 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:33.553 08:05:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:34.493 08:05:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:34.493 08:05:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:34.493 08:05:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:34.493 08:05:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:34.493 08:05:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:34.493 08:05:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:34.493 08:05:39 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:34.493 08:05:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:34.493 08:05:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:34.493 08:05:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:34.493 08:05:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:34.493 08:05:39 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:38.692 Waiting for block devices as requested 00:08:38.692 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:38.692 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:38.692 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:38.692 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:38.692 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:38.692 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:38.952 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:38.952 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:38.952 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:39.212 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:39.212 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:39.472 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:39.472 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:39.472 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:39.472 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:39.731 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:39.731 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:39.993 08:05:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:39.994 08:05:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1487 -- # grep 
0000:65:00.0/nvme/nvme 00:08:39.994 08:05:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:39.994 08:05:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:39.994 08:05:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:39.994 08:05:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:39.994 08:05:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:08:39.994 08:05:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:39.994 08:05:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:39.994 08:05:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:39.994 08:05:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:39.994 08:05:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:39.994 08:05:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:39.994 08:05:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:39.994 08:05:44 -- common/autotest_common.sh@1543 -- # continue 00:08:39.994 08:05:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:39.994 08:05:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.994 08:05:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.256 08:05:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:40.256 08:05:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.256 08:05:44 -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.256 08:05:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:44.465 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:44.465 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:44.465 08:05:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:44.465 08:05:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.465 08:05:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.465 08:05:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:44.465 08:05:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:44.465 08:05:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:44.465 08:05:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:44.465 08:05:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:44.724 08:05:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:44.724 08:05:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:44.724 08:05:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:44.724 08:05:49 -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:08:44.724 08:05:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:44.724 08:05:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.724 08:05:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:44.724 08:05:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:44.724 08:05:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:44.724 08:05:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:44.724 08:05:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:44.724 08:05:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:44.724 08:05:49 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:08:44.724 08:05:49 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:44.724 08:05:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:44.724 08:05:49 -- common/autotest_common.sh@1572 -- # return 0 00:08:44.724 08:05:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:44.724 08:05:49 -- common/autotest_common.sh@1580 -- # return 0 00:08:44.724 08:05:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:44.724 08:05:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:44.724 08:05:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:44.724 08:05:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:44.724 08:05:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:44.724 08:05:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.724 08:05:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.724 08:05:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:44.724 08:05:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:44.724 08:05:49 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.724 08:05:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.724 08:05:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.724 ************************************ 00:08:44.724 START TEST env 00:08:44.724 ************************************ 00:08:44.724 08:05:49 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:44.724 * Looking for test storage... 00:08:44.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.984 08:05:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.984 08:05:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.984 08:05:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.984 08:05:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.984 08:05:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.984 08:05:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.984 08:05:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.984 08:05:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.984 08:05:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.984 08:05:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.984 08:05:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.984 08:05:49 env -- scripts/common.sh@344 -- # case "$op" in 00:08:44.984 08:05:49 env -- scripts/common.sh@345 -- # : 1 00:08:44.984 08:05:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.984 08:05:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.984 08:05:49 env -- scripts/common.sh@365 -- # decimal 1 00:08:44.984 08:05:49 env -- scripts/common.sh@353 -- # local d=1 00:08:44.984 08:05:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.984 08:05:49 env -- scripts/common.sh@355 -- # echo 1 00:08:44.984 08:05:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.984 08:05:49 env -- scripts/common.sh@366 -- # decimal 2 00:08:44.984 08:05:49 env -- scripts/common.sh@353 -- # local d=2 00:08:44.984 08:05:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.984 08:05:49 env -- scripts/common.sh@355 -- # echo 2 00:08:44.984 08:05:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.984 08:05:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.984 08:05:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.984 08:05:49 env -- scripts/common.sh@368 -- # return 0 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.984 --rc genhtml_branch_coverage=1 00:08:44.984 --rc genhtml_function_coverage=1 00:08:44.984 --rc genhtml_legend=1 00:08:44.984 --rc geninfo_all_blocks=1 00:08:44.984 --rc geninfo_unexecuted_blocks=1 00:08:44.984 00:08:44.984 ' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.984 --rc genhtml_branch_coverage=1 00:08:44.984 --rc genhtml_function_coverage=1 00:08:44.984 --rc genhtml_legend=1 00:08:44.984 --rc geninfo_all_blocks=1 00:08:44.984 --rc geninfo_unexecuted_blocks=1 00:08:44.984 00:08:44.984 ' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:44.984 --rc genhtml_branch_coverage=1 00:08:44.984 --rc genhtml_function_coverage=1 00:08:44.984 --rc genhtml_legend=1 00:08:44.984 --rc geninfo_all_blocks=1 00:08:44.984 --rc geninfo_unexecuted_blocks=1 00:08:44.984 00:08:44.984 ' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.984 --rc genhtml_branch_coverage=1 00:08:44.984 --rc genhtml_function_coverage=1 00:08:44.984 --rc genhtml_legend=1 00:08:44.984 --rc geninfo_all_blocks=1 00:08:44.984 --rc geninfo_unexecuted_blocks=1 00:08:44.984 00:08:44.984 ' 00:08:44.984 08:05:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.984 08:05:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.984 08:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:08:44.984 ************************************ 00:08:44.984 START TEST env_memory 00:08:44.984 ************************************ 00:08:44.984 08:05:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:44.984 00:08:44.984 00:08:44.984 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.984 http://cunit.sourceforge.net/ 00:08:44.984 00:08:44.984 00:08:44.984 Suite: memory 00:08:44.984 Test: alloc and free memory map ...[2024-11-20 08:05:49.637994] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:44.984 passed 00:08:44.984 Test: mem map translation ...[2024-11-20 08:05:49.663320] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:44.984 [2024-11-20 
08:05:49.663339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:44.984 [2024-11-20 08:05:49.663385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:44.984 [2024-11-20 08:05:49.663395] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:44.984 passed 00:08:45.246 Test: mem map registration ...[2024-11-20 08:05:49.718403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:45.246 [2024-11-20 08:05:49.718419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:45.246 passed 00:08:45.246 Test: mem map adjacent registrations ...passed 00:08:45.246 00:08:45.246 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.246 suites 1 1 n/a 0 0 00:08:45.246 tests 4 4 4 0 0 00:08:45.246 asserts 152 152 152 0 n/a 00:08:45.246 00:08:45.246 Elapsed time = 0.195 seconds 00:08:45.246 00:08:45.246 real 0m0.209s 00:08:45.246 user 0m0.199s 00:08:45.246 sys 0m0.009s 00:08:45.246 08:05:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.246 08:05:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:45.246 ************************************ 00:08:45.246 END TEST env_memory 00:08:45.246 ************************************ 00:08:45.246 08:05:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:45.246 08:05:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:08:45.246 08:05:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.246 08:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:08:45.246 ************************************ 00:08:45.246 START TEST env_vtophys 00:08:45.246 ************************************ 00:08:45.246 08:05:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:45.246 EAL: lib.eal log level changed from notice to debug 00:08:45.246 EAL: Detected lcore 0 as core 0 on socket 0 00:08:45.246 EAL: Detected lcore 1 as core 1 on socket 0 00:08:45.246 EAL: Detected lcore 2 as core 2 on socket 0 00:08:45.246 EAL: Detected lcore 3 as core 3 on socket 0 00:08:45.246 EAL: Detected lcore 4 as core 4 on socket 0 00:08:45.246 EAL: Detected lcore 5 as core 5 on socket 0 00:08:45.246 EAL: Detected lcore 6 as core 6 on socket 0 00:08:45.246 EAL: Detected lcore 7 as core 7 on socket 0 00:08:45.246 EAL: Detected lcore 8 as core 8 on socket 0 00:08:45.246 EAL: Detected lcore 9 as core 9 on socket 0 00:08:45.246 EAL: Detected lcore 10 as core 10 on socket 0 00:08:45.246 EAL: Detected lcore 11 as core 11 on socket 0 00:08:45.246 EAL: Detected lcore 12 as core 12 on socket 0 00:08:45.246 EAL: Detected lcore 13 as core 13 on socket 0 00:08:45.246 EAL: Detected lcore 14 as core 14 on socket 0 00:08:45.246 EAL: Detected lcore 15 as core 15 on socket 0 00:08:45.246 EAL: Detected lcore 16 as core 16 on socket 0 00:08:45.246 EAL: Detected lcore 17 as core 17 on socket 0 00:08:45.246 EAL: Detected lcore 18 as core 18 on socket 0 00:08:45.246 EAL: Detected lcore 19 as core 19 on socket 0 00:08:45.246 EAL: Detected lcore 20 as core 20 on socket 0 00:08:45.246 EAL: Detected lcore 21 as core 21 on socket 0 00:08:45.246 EAL: Detected lcore 22 as core 22 on socket 0 00:08:45.246 EAL: Detected lcore 23 as core 23 on socket 0 00:08:45.246 EAL: Detected lcore 24 as core 24 on socket 0 00:08:45.246 EAL: Detected lcore 25 
as core 25 on socket 0 00:08:45.246 EAL: Detected lcore 26 as core 26 on socket 0 00:08:45.246 EAL: Detected lcore 27 as core 27 on socket 0 00:08:45.246 EAL: Detected lcore 28 as core 28 on socket 0 00:08:45.246 EAL: Detected lcore 29 as core 29 on socket 0 00:08:45.246 EAL: Detected lcore 30 as core 30 on socket 0 00:08:45.246 EAL: Detected lcore 31 as core 31 on socket 0 00:08:45.246 EAL: Detected lcore 32 as core 32 on socket 0 00:08:45.246 EAL: Detected lcore 33 as core 33 on socket 0 00:08:45.246 EAL: Detected lcore 34 as core 34 on socket 0 00:08:45.246 EAL: Detected lcore 35 as core 35 on socket 0 00:08:45.246 EAL: Detected lcore 36 as core 0 on socket 1 00:08:45.246 EAL: Detected lcore 37 as core 1 on socket 1 00:08:45.246 EAL: Detected lcore 38 as core 2 on socket 1 00:08:45.246 EAL: Detected lcore 39 as core 3 on socket 1 00:08:45.246 EAL: Detected lcore 40 as core 4 on socket 1 00:08:45.246 EAL: Detected lcore 41 as core 5 on socket 1 00:08:45.246 EAL: Detected lcore 42 as core 6 on socket 1 00:08:45.246 EAL: Detected lcore 43 as core 7 on socket 1 00:08:45.246 EAL: Detected lcore 44 as core 8 on socket 1 00:08:45.246 EAL: Detected lcore 45 as core 9 on socket 1 00:08:45.246 EAL: Detected lcore 46 as core 10 on socket 1 00:08:45.246 EAL: Detected lcore 47 as core 11 on socket 1 00:08:45.246 EAL: Detected lcore 48 as core 12 on socket 1 00:08:45.246 EAL: Detected lcore 49 as core 13 on socket 1 00:08:45.246 EAL: Detected lcore 50 as core 14 on socket 1 00:08:45.246 EAL: Detected lcore 51 as core 15 on socket 1 00:08:45.246 EAL: Detected lcore 52 as core 16 on socket 1 00:08:45.246 EAL: Detected lcore 53 as core 17 on socket 1 00:08:45.246 EAL: Detected lcore 54 as core 18 on socket 1 00:08:45.246 EAL: Detected lcore 55 as core 19 on socket 1 00:08:45.246 EAL: Detected lcore 56 as core 20 on socket 1 00:08:45.246 EAL: Detected lcore 57 as core 21 on socket 1 00:08:45.246 EAL: Detected lcore 58 as core 22 on socket 1 00:08:45.246 EAL: Detected lcore 59 as 
core 23 on socket 1 00:08:45.246 EAL: Detected lcore 60 as core 24 on socket 1 00:08:45.246 EAL: Detected lcore 61 as core 25 on socket 1 00:08:45.246 EAL: Detected lcore 62 as core 26 on socket 1 00:08:45.246 EAL: Detected lcore 63 as core 27 on socket 1 00:08:45.246 EAL: Detected lcore 64 as core 28 on socket 1 00:08:45.246 EAL: Detected lcore 65 as core 29 on socket 1 00:08:45.246 EAL: Detected lcore 66 as core 30 on socket 1 00:08:45.246 EAL: Detected lcore 67 as core 31 on socket 1 00:08:45.246 EAL: Detected lcore 68 as core 32 on socket 1 00:08:45.246 EAL: Detected lcore 69 as core 33 on socket 1 00:08:45.246 EAL: Detected lcore 70 as core 34 on socket 1 00:08:45.246 EAL: Detected lcore 71 as core 35 on socket 1 00:08:45.246 EAL: Detected lcore 72 as core 0 on socket 0 00:08:45.246 EAL: Detected lcore 73 as core 1 on socket 0 00:08:45.246 EAL: Detected lcore 74 as core 2 on socket 0 00:08:45.246 EAL: Detected lcore 75 as core 3 on socket 0 00:08:45.246 EAL: Detected lcore 76 as core 4 on socket 0 00:08:45.246 EAL: Detected lcore 77 as core 5 on socket 0 00:08:45.246 EAL: Detected lcore 78 as core 6 on socket 0 00:08:45.246 EAL: Detected lcore 79 as core 7 on socket 0 00:08:45.246 EAL: Detected lcore 80 as core 8 on socket 0 00:08:45.246 EAL: Detected lcore 81 as core 9 on socket 0 00:08:45.246 EAL: Detected lcore 82 as core 10 on socket 0 00:08:45.246 EAL: Detected lcore 83 as core 11 on socket 0 00:08:45.246 EAL: Detected lcore 84 as core 12 on socket 0 00:08:45.246 EAL: Detected lcore 85 as core 13 on socket 0 00:08:45.246 EAL: Detected lcore 86 as core 14 on socket 0 00:08:45.246 EAL: Detected lcore 87 as core 15 on socket 0 00:08:45.246 EAL: Detected lcore 88 as core 16 on socket 0 00:08:45.246 EAL: Detected lcore 89 as core 17 on socket 0 00:08:45.246 EAL: Detected lcore 90 as core 18 on socket 0 00:08:45.246 EAL: Detected lcore 91 as core 19 on socket 0 00:08:45.246 EAL: Detected lcore 92 as core 20 on socket 0 00:08:45.246 EAL: Detected lcore 93 as 
core 21 on socket 0 00:08:45.246 EAL: Detected lcore 94 as core 22 on socket 0 00:08:45.246 EAL: Detected lcore 95 as core 23 on socket 0 00:08:45.246 EAL: Detected lcore 96 as core 24 on socket 0 00:08:45.246 EAL: Detected lcore 97 as core 25 on socket 0 00:08:45.246 EAL: Detected lcore 98 as core 26 on socket 0 00:08:45.246 EAL: Detected lcore 99 as core 27 on socket 0 00:08:45.246 EAL: Detected lcore 100 as core 28 on socket 0 00:08:45.246 EAL: Detected lcore 101 as core 29 on socket 0 00:08:45.246 EAL: Detected lcore 102 as core 30 on socket 0 00:08:45.246 EAL: Detected lcore 103 as core 31 on socket 0 00:08:45.246 EAL: Detected lcore 104 as core 32 on socket 0 00:08:45.246 EAL: Detected lcore 105 as core 33 on socket 0 00:08:45.246 EAL: Detected lcore 106 as core 34 on socket 0 00:08:45.246 EAL: Detected lcore 107 as core 35 on socket 0 00:08:45.246 EAL: Detected lcore 108 as core 0 on socket 1 00:08:45.246 EAL: Detected lcore 109 as core 1 on socket 1 00:08:45.246 EAL: Detected lcore 110 as core 2 on socket 1 00:08:45.246 EAL: Detected lcore 111 as core 3 on socket 1 00:08:45.246 EAL: Detected lcore 112 as core 4 on socket 1 00:08:45.246 EAL: Detected lcore 113 as core 5 on socket 1 00:08:45.246 EAL: Detected lcore 114 as core 6 on socket 1 00:08:45.246 EAL: Detected lcore 115 as core 7 on socket 1 00:08:45.246 EAL: Detected lcore 116 as core 8 on socket 1 00:08:45.246 EAL: Detected lcore 117 as core 9 on socket 1 00:08:45.246 EAL: Detected lcore 118 as core 10 on socket 1 00:08:45.246 EAL: Detected lcore 119 as core 11 on socket 1 00:08:45.246 EAL: Detected lcore 120 as core 12 on socket 1 00:08:45.246 EAL: Detected lcore 121 as core 13 on socket 1 00:08:45.246 EAL: Detected lcore 122 as core 14 on socket 1 00:08:45.246 EAL: Detected lcore 123 as core 15 on socket 1 00:08:45.246 EAL: Detected lcore 124 as core 16 on socket 1 00:08:45.246 EAL: Detected lcore 125 as core 17 on socket 1 00:08:45.246 EAL: Detected lcore 126 as core 18 on socket 1 00:08:45.246 
EAL: Detected lcore 127 as core 19 on socket 1 00:08:45.246 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:45.246 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:45.247 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:45.247 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:45.247 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:45.247 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:45.247 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:45.247 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:45.247 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:45.247 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:45.247 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:45.247 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:45.247 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:45.247 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:45.247 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:45.247 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:45.247 EAL: Maximum logical cores by configuration: 128 00:08:45.247 EAL: Detected CPU lcores: 128 00:08:45.247 EAL: Detected NUMA nodes: 2 00:08:45.247 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:45.247 EAL: Detected shared linkage of DPDK 00:08:45.247 EAL: No shared files mode enabled, IPC will be disabled 00:08:45.247 EAL: Bus pci wants IOVA as 'DC' 00:08:45.247 EAL: Buses did not request a specific IOVA mode. 00:08:45.247 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:45.247 EAL: Selected IOVA mode 'VA' 00:08:45.247 EAL: Probing VFIO support... 00:08:45.247 EAL: IOMMU type 1 (Type 1) is supported 00:08:45.247 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:45.247 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:45.247 EAL: VFIO support initialized 00:08:45.247 EAL: Ask a virtual area of 0x2e000 bytes 00:08:45.247 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:45.247 EAL: Setting up physically contiguous memory... 
00:08:45.247 EAL: Setting maximum number of open files to 524288 00:08:45.247 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:45.247 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:45.247 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:45.247 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:45.247 EAL: Ask a virtual area of 0x61000 bytes 00:08:45.247 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:45.247 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:45.247 EAL: Ask a virtual area of 0x400000000 bytes 00:08:45.247 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:45.247 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:45.247 EAL: Hugepages will be freed exactly as allocated. 
00:08:45.247 EAL: No shared files mode enabled, IPC is disabled
00:08:45.247 EAL: No shared files mode enabled, IPC is disabled
00:08:45.247 EAL: TSC frequency is ~2400000 KHz
00:08:45.247 EAL: Main lcore 0 is ready (tid=7ff39e1a9a00;cpuset=[0])
00:08:45.247 EAL: Trying to obtain current memory policy.
00:08:45.247 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.247 EAL: Restoring previous memory policy: 0
00:08:45.247 EAL: request: mp_malloc_sync
00:08:45.247 EAL: No shared files mode enabled, IPC is disabled
00:08:45.247 EAL: Heap on socket 0 was expanded by 2MB
00:08:45.247 EAL: No shared files mode enabled, IPC is disabled
00:08:45.247 EAL: No PCI address specified using 'addr=' in: bus=pci
00:08:45.247 EAL: Mem event callback 'spdk:(nil)' registered
00:08:45.508
00:08:45.508
00:08:45.508 CUnit - A unit testing framework for C - Version 2.1-3
00:08:45.508 http://cunit.sourceforge.net/
00:08:45.508
00:08:45.508
00:08:45.508 Suite: components_suite
00:08:45.508 Test: vtophys_malloc_test ...passed
00:08:45.508 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 4MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 4MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 6MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 6MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 10MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 10MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 18MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 18MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 34MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 34MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 66MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 66MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was expanded by 130MB
00:08:45.508 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.508 EAL: request: mp_malloc_sync
00:08:45.508 EAL: No shared files mode enabled, IPC is disabled
00:08:45.508 EAL: Heap on socket 0 was shrunk by 130MB
00:08:45.508 EAL: Trying to obtain current memory policy.
00:08:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.508 EAL: Restoring previous memory policy: 4
00:08:45.509 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.509 EAL: request: mp_malloc_sync
00:08:45.509 EAL: No shared files mode enabled, IPC is disabled
00:08:45.509 EAL: Heap on socket 0 was expanded by 258MB
00:08:45.509 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.509 EAL: request: mp_malloc_sync
00:08:45.509 EAL: No shared files mode enabled, IPC is disabled
00:08:45.509 EAL: Heap on socket 0 was shrunk by 258MB
00:08:45.509 EAL: Trying to obtain current memory policy.
00:08:45.509 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.769 EAL: Restoring previous memory policy: 4
00:08:45.769 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.769 EAL: request: mp_malloc_sync
00:08:45.769 EAL: No shared files mode enabled, IPC is disabled
00:08:45.769 EAL: Heap on socket 0 was expanded by 514MB
00:08:45.769 EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.769 EAL: request: mp_malloc_sync
00:08:45.769 EAL: No shared files mode enabled, IPC is disabled
00:08:45.769 EAL: Heap on socket 0 was shrunk by 514MB
00:08:45.769 EAL: Trying to obtain current memory policy.
00:08:45.769 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:46.029 EAL: Restoring previous memory policy: 4
00:08:46.029 EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.029 EAL: request: mp_malloc_sync
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029 EAL: Heap on socket 0 was expanded by 1026MB
00:08:46.029 EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.029 EAL: request: mp_malloc_sync
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029 EAL: Heap on socket 0 was shrunk by 1026MB
00:08:46.029 passed
00:08:46.029
00:08:46.029 Run Summary: Type Total Ran Passed Failed Inactive
00:08:46.029 suites 1 1 n/a 0 0
00:08:46.029 tests 2 2 2 0 0
00:08:46.029 asserts 497 497 497 0 n/a
00:08:46.029
00:08:46.029 Elapsed time = 0.667 seconds
00:08:46.029 EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.029 EAL: request: mp_malloc_sync
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029 EAL: Heap on socket 0 was shrunk by 2MB
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029 EAL: No shared files mode enabled, IPC is disabled
00:08:46.029
00:08:46.029 real 0m0.834s
00:08:46.029 user 0m0.436s
00:08:46.029 sys 0m0.349s
00:08:46.029 08:05:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.029 08:05:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:46.029 ************************************
00:08:46.029 END TEST env_vtophys
00:08:46.029 ************************************
00:08:46.029 08:05:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:46.029 08:05:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:46.029 08:05:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.029 08:05:50 env -- common/autotest_common.sh@10 -- # set +x
00:08:46.291 ************************************
00:08:46.291 START TEST env_pci
00:08:46.291 ************************************
00:08:46.291 08:05:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:46.291
00:08:46.291
00:08:46.291 CUnit - A unit testing framework for C - Version 2.1-3
00:08:46.291 http://cunit.sourceforge.net/
00:08:46.291
00:08:46.291
00:08:46.291 Suite: pci
00:08:46.291 Test: pci_hook ...[2024-11-20 08:05:50.799054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1747482 has claimed it
00:08:46.291 EAL: Cannot find device (10000:00:01.0)
00:08:46.291 EAL: Failed to attach device on primary process
00:08:46.291 passed
00:08:46.291
00:08:46.291 Run Summary: Type Total Ran Passed Failed Inactive
00:08:46.291 suites 1 1 n/a 0 0
00:08:46.291 tests 1 1 1 0 0
00:08:46.291 asserts 25 25 25 0 n/a
00:08:46.291
00:08:46.291 Elapsed time = 0.034 seconds
00:08:46.291
00:08:46.291 real 0m0.055s
00:08:46.291 user 0m0.020s
00:08:46.291 sys 0m0.034s
00:08:46.291 08:05:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.291 08:05:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:46.291 ************************************
00:08:46.291 END TEST env_pci
00:08:46.291 ************************************
00:08:46.291 08:05:50 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:46.291 08:05:50 env -- env/env.sh@15 -- # uname
00:08:46.291 08:05:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:46.291 08:05:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:46.291 08:05:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:46.291 08:05:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:46.291 08:05:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.291 08:05:50 env -- common/autotest_common.sh@10 -- # set +x
00:08:46.291 ************************************
00:08:46.291 START TEST env_dpdk_post_init
00:08:46.291 ************************************
00:08:46.291 08:05:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:46.291 EAL: Detected CPU lcores: 128
00:08:46.291 EAL: Detected NUMA nodes: 2
00:08:46.291 EAL: Detected shared linkage of DPDK
00:08:46.291 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:46.291 EAL: Selected IOVA mode 'VA'
00:08:46.291 EAL: VFIO support initialized
00:08:46.291 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:46.551 EAL: Using IOMMU type 1 (Type 1)
00:08:46.551 EAL: Ignore mapping IO port bar(1)
00:08:46.812 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:08:46.812 EAL: Ignore mapping IO port bar(1)
00:08:47.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:08:47.072 EAL: Ignore mapping IO port bar(1)
00:08:47.332 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:08:47.332 EAL: Ignore mapping IO port bar(1)
00:08:47.592 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:08:47.592 EAL: Ignore mapping IO port bar(1)
00:08:47.852 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:08:47.852 EAL: Ignore mapping IO port bar(1)
00:08:48.112 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:08:48.112 EAL: Ignore mapping IO port bar(1)
00:08:48.373 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:08:48.373 EAL: Ignore mapping IO port bar(1)
00:08:48.373 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:08:48.373 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:08:48.633 EAL: Ignore mapping IO port bar(1)
00:08:48.633 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:08:48.894 EAL: Ignore mapping IO port bar(1)
00:08:48.894 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:08:49.153 EAL: Ignore mapping IO port bar(1)
00:08:49.153 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:08:49.153 EAL: Ignore mapping IO port bar(1)
00:08:49.413 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:08:49.413 EAL: Ignore mapping IO port bar(1)
00:08:49.672 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:08:49.673 EAL: Ignore mapping IO port bar(1)
00:08:49.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:08:49.932 EAL: Ignore mapping IO port bar(1)
00:08:49.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:08:50.192 EAL: Ignore mapping IO port bar(1)
00:08:50.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:08:50.192 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:08:50.192 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:08:50.451 Starting DPDK initialization...
00:08:50.451 Starting SPDK post initialization...
00:08:50.451 SPDK NVMe probe
00:08:50.451 Attaching to 0000:65:00.0
00:08:50.451 Attached to 0000:65:00.0
00:08:50.451 Cleaning up...
00:08:52.362
00:08:52.362
00:08:52.362 real 0m5.747s
00:08:52.362 user 0m0.108s
00:08:52.362 sys 0m0.174s
00:08:52.362 08:05:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:52.362 08:05:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:52.362 ************************************
00:08:52.362 END TEST env_dpdk_post_init
00:08:52.362 ************************************
00:08:52.362 08:05:56 env -- env/env.sh@26 -- # uname
00:08:52.362 08:05:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:52.362 08:05:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:52.362 08:05:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:52.362 08:05:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:52.362 08:05:56 env -- common/autotest_common.sh@10 -- # set +x
00:08:52.362 ************************************
00:08:52.362 START TEST env_mem_callbacks
00:08:52.362 ************************************
00:08:52.362 08:05:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:52.362 EAL: Detected CPU lcores: 128
00:08:52.362 EAL: Detected NUMA nodes: 2
00:08:52.362 EAL: Detected shared linkage of DPDK
00:08:52.362 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:52.362 EAL: Selected IOVA mode 'VA'
00:08:52.362 EAL: VFIO support initialized
00:08:52.362 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:52.362
00:08:52.362
00:08:52.362 CUnit - A unit testing framework for C - Version 2.1-3
00:08:52.362 http://cunit.sourceforge.net/
00:08:52.362
00:08:52.362
00:08:52.362 Suite: memory
00:08:52.362 Test: test ...
00:08:52.362 register 0x200000200000 2097152
00:08:52.362 malloc 3145728
00:08:52.362 register 0x200000400000 4194304
00:08:52.362 buf 0x200000500000 len 3145728 PASSED
00:08:52.362 malloc 64
00:08:52.362 buf 0x2000004fff40 len 64 PASSED
00:08:52.362 malloc 4194304
00:08:52.362 register 0x200000800000 6291456
00:08:52.362 buf 0x200000a00000 len 4194304 PASSED
00:08:52.362 free 0x200000500000 3145728
00:08:52.363 free 0x2000004fff40 64
00:08:52.363 unregister 0x200000400000 4194304 PASSED
00:08:52.363 free 0x200000a00000 4194304
00:08:52.363 unregister 0x200000800000 6291456 PASSED
00:08:52.363 malloc 8388608
00:08:52.363 register 0x200000400000 10485760
00:08:52.363 buf 0x200000600000 len 8388608 PASSED
00:08:52.363 free 0x200000600000 8388608
00:08:52.363 unregister 0x200000400000 10485760 PASSED
00:08:52.363 passed
00:08:52.363
00:08:52.363 Run Summary: Type Total Ran Passed Failed Inactive
00:08:52.363 suites 1 1 n/a 0 0
00:08:52.363 tests 1 1 1 0 0
00:08:52.363 asserts 15 15 15 0 n/a
00:08:52.363
00:08:52.363 Elapsed time = 0.005 seconds
00:08:52.363
00:08:52.363 real 0m0.063s
00:08:52.363 user 0m0.021s
00:08:52.363 sys 0m0.042s
00:08:52.363 08:05:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:52.363 08:05:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:52.363 ************************************
00:08:52.363 END TEST env_mem_callbacks
00:08:52.363 ************************************
00:08:52.363
00:08:52.363 real 0m7.501s
00:08:52.363 user 0m1.054s
00:08:52.363 sys 0m0.961s
00:08:52.363 08:05:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:52.363 08:05:56 env -- common/autotest_common.sh@10 -- # set +x
00:08:52.363 ************************************
00:08:52.363 END TEST env
00:08:52.363 ************************************
00:08:52.363 08:05:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:52.363 08:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:52.363 08:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:52.363 08:05:56 -- common/autotest_common.sh@10 -- # set +x
00:08:52.363 ************************************
00:08:52.363 START TEST rpc
00:08:52.363 ************************************
00:08:52.363 08:05:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:52.363 * Looking for test storage...
00:08:52.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:52.363 08:05:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:52.363 08:05:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:52.363 08:05:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:52.624 08:05:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:52.624 08:05:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:52.624 08:05:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:52.624 08:05:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:52.624 08:05:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:52.624 08:05:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:52.624 08:05:57 rpc -- scripts/common.sh@345 -- # : 1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:52.624 08:05:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:52.624 08:05:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@353 -- # local d=1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:52.624 08:05:57 rpc -- scripts/common.sh@355 -- # echo 1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:52.624 08:05:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@353 -- # local d=2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:52.624 08:05:57 rpc -- scripts/common.sh@355 -- # echo 2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:52.624 08:05:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:52.624 08:05:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:52.624 08:05:57 rpc -- scripts/common.sh@368 -- # return 0
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:52.624 --rc genhtml_branch_coverage=1
00:08:52.624 --rc genhtml_function_coverage=1
00:08:52.624 --rc genhtml_legend=1
00:08:52.624 --rc geninfo_all_blocks=1
00:08:52.624 --rc geninfo_unexecuted_blocks=1
00:08:52.624
00:08:52.624 '
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:52.624 --rc genhtml_branch_coverage=1
00:08:52.624 --rc genhtml_function_coverage=1
00:08:52.624 --rc genhtml_legend=1
00:08:52.624 --rc geninfo_all_blocks=1
00:08:52.624 --rc geninfo_unexecuted_blocks=1
00:08:52.624
00:08:52.624 '
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:52.624 --rc genhtml_branch_coverage=1
00:08:52.624 --rc genhtml_function_coverage=1
00:08:52.624 --rc genhtml_legend=1
00:08:52.624 --rc geninfo_all_blocks=1
00:08:52.624 --rc geninfo_unexecuted_blocks=1
00:08:52.624
00:08:52.624 '
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:52.624 --rc genhtml_branch_coverage=1
00:08:52.624 --rc genhtml_function_coverage=1
00:08:52.624 --rc genhtml_legend=1
00:08:52.624 --rc geninfo_all_blocks=1
00:08:52.624 --rc geninfo_unexecuted_blocks=1
00:08:52.624
00:08:52.624 '
00:08:52.624 08:05:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1748828
00:08:52.624 08:05:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:52.624 08:05:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1748828
00:08:52.624 08:05:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 1748828 ']'
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:52.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:52.624 08:05:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:52.624 [2024-11-20 08:05:57.196244] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:08:52.624 [2024-11-20 08:05:57.196322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748828 ]
00:08:52.624 [2024-11-20 08:05:57.278660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.624 [2024-11-20 08:05:57.320143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:52.624 [2024-11-20 08:05:57.320179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1748828' to capture a snapshot of events at runtime.
00:08:52.624 [2024-11-20 08:05:57.320187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:52.624 [2024-11-20 08:05:57.320193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:52.624 [2024-11-20 08:05:57.320199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1748828 for offline analysis/debug.
00:08:52.624 [2024-11-20 08:05:57.320801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.565 08:05:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:53.565 08:05:57 rpc -- common/autotest_common.sh@868 -- # return 0
00:08:53.565 08:05:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:53.565 08:05:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:53.565 08:05:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:53.565 08:05:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:53.565 08:05:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.565 08:05:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.565 08:05:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:53.565 ************************************
00:08:53.565 START TEST rpc_integrity
00:08:53.565 ************************************
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:53.565 {
00:08:53.565 "name": "Malloc0",
00:08:53.565 "aliases": [
00:08:53.565 "02838baa-3fa2-4a72-8379-164aeec98e58"
00:08:53.565 ],
00:08:53.565 "product_name": "Malloc disk",
00:08:53.565 "block_size": 512,
00:08:53.565 "num_blocks": 16384,
00:08:53.565 "uuid": "02838baa-3fa2-4a72-8379-164aeec98e58",
00:08:53.565 "assigned_rate_limits": {
00:08:53.565 "rw_ios_per_sec": 0,
00:08:53.565 "rw_mbytes_per_sec": 0,
00:08:53.565 "r_mbytes_per_sec": 0,
00:08:53.565 "w_mbytes_per_sec": 0
00:08:53.565 },
00:08:53.565 "claimed": false,
00:08:53.565 "zoned": false,
00:08:53.565 "supported_io_types": {
00:08:53.565 "read": true,
00:08:53.565 "write": true,
00:08:53.565 "unmap": true,
00:08:53.565 "flush": true,
00:08:53.565 "reset": true,
00:08:53.565 "nvme_admin": false,
00:08:53.565 "nvme_io": false,
00:08:53.565 "nvme_io_md": false,
00:08:53.565 "write_zeroes": true,
00:08:53.565 "zcopy": true,
00:08:53.565 "get_zone_info": false,
00:08:53.565 "zone_management": false,
00:08:53.565 "zone_append": false,
00:08:53.565 "compare": false,
00:08:53.565 "compare_and_write": false,
00:08:53.565 "abort": true,
00:08:53.565 "seek_hole": false,
00:08:53.565 "seek_data": false,
00:08:53.565 "copy": true,
00:08:53.565 "nvme_iov_md": false
00:08:53.565 },
00:08:53.565 "memory_domains": [
00:08:53.565 {
00:08:53.565 "dma_device_id": "system",
00:08:53.565 "dma_device_type": 1
00:08:53.565 },
00:08:53.565 {
00:08:53.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.565 "dma_device_type": 2
00:08:53.565 }
00:08:53.565 ],
00:08:53.565 "driver_specific": {}
00:08:53.565 }
00:08:53.565 ]'
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:53.565 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.565 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.565 [2024-11-20 08:05:58.165629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:53.565 [2024-11-20 08:05:58.165661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:53.565 [2024-11-20 08:05:58.165674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6a1b10
00:08:53.565 [2024-11-20 08:05:58.165681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:53.565 [2024-11-20 08:05:58.167047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:53.565 [2024-11-20 08:05:58.167069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 Passthru0
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:53.566 {
00:08:53.566 "name": "Malloc0",
00:08:53.566 "aliases": [
00:08:53.566 "02838baa-3fa2-4a72-8379-164aeec98e58"
00:08:53.566 ],
00:08:53.566 "product_name": "Malloc disk",
00:08:53.566 "block_size": 512,
00:08:53.566 "num_blocks": 16384,
00:08:53.566 "uuid": "02838baa-3fa2-4a72-8379-164aeec98e58",
00:08:53.566 "assigned_rate_limits": {
00:08:53.566 "rw_ios_per_sec": 0,
00:08:53.566 "rw_mbytes_per_sec": 0,
00:08:53.566 "r_mbytes_per_sec": 0,
00:08:53.566 "w_mbytes_per_sec": 0
00:08:53.566 },
00:08:53.566 "claimed": true,
00:08:53.566 "claim_type": "exclusive_write",
00:08:53.566 "zoned": false,
00:08:53.566 "supported_io_types": {
00:08:53.566 "read": true,
00:08:53.566 "write": true,
00:08:53.566 "unmap": true,
00:08:53.566 "flush": true,
00:08:53.566 "reset": true,
00:08:53.566 "nvme_admin": false,
00:08:53.566 "nvme_io": false,
00:08:53.566 "nvme_io_md": false,
00:08:53.566 "write_zeroes": true,
00:08:53.566 "zcopy": true,
00:08:53.566 "get_zone_info": false,
00:08:53.566 "zone_management": false,
00:08:53.566 "zone_append": false,
00:08:53.566 "compare": false,
00:08:53.566 "compare_and_write": false,
00:08:53.566 "abort": true,
00:08:53.566 "seek_hole": false,
00:08:53.566 "seek_data": false,
00:08:53.566 "copy": true,
00:08:53.566 "nvme_iov_md": false
00:08:53.566 },
00:08:53.566 "memory_domains": [
00:08:53.566 {
00:08:53.566 "dma_device_id": "system",
00:08:53.566 "dma_device_type": 1
00:08:53.566 },
00:08:53.566 {
00:08:53.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.566 "dma_device_type": 2
00:08:53.566 }
00:08:53.566 ],
00:08:53.566 "driver_specific": {}
00:08:53.566 },
00:08:53.566 {
00:08:53.566 "name": "Passthru0",
00:08:53.566 "aliases": [
00:08:53.566 "f09b470a-1382-547d-b84c-6447a10757ad"
00:08:53.566 ],
00:08:53.566 "product_name": "passthru",
00:08:53.566 "block_size": 512,
00:08:53.566 "num_blocks": 16384,
00:08:53.566 "uuid": "f09b470a-1382-547d-b84c-6447a10757ad",
00:08:53.566 "assigned_rate_limits": {
00:08:53.566 "rw_ios_per_sec": 0,
00:08:53.566 "rw_mbytes_per_sec": 0,
00:08:53.566 "r_mbytes_per_sec": 0,
00:08:53.566 "w_mbytes_per_sec": 0
00:08:53.566 },
00:08:53.566 "claimed": false,
00:08:53.566 "zoned": false,
00:08:53.566 "supported_io_types": {
00:08:53.566 "read": true,
00:08:53.566 "write": true,
00:08:53.566 "unmap": true,
00:08:53.566 "flush": true,
00:08:53.566 "reset": true,
00:08:53.566 "nvme_admin": false,
00:08:53.566 "nvme_io": false,
00:08:53.566 "nvme_io_md": false,
00:08:53.566 "write_zeroes": true,
00:08:53.566 "zcopy": true,
00:08:53.566 "get_zone_info": false,
00:08:53.566 "zone_management": false,
00:08:53.566 "zone_append": false,
00:08:53.566 "compare": false,
00:08:53.566 "compare_and_write": false,
00:08:53.566 "abort": true,
00:08:53.566 "seek_hole": false,
00:08:53.566 "seek_data": false,
00:08:53.566 "copy": true,
00:08:53.566 "nvme_iov_md": false
00:08:53.566 },
00:08:53.566 "memory_domains": [
00:08:53.566 {
00:08:53.566 "dma_device_id": "system",
00:08:53.566 "dma_device_type": 1
00:08:53.566 },
00:08:53.566 {
00:08:53.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.566 "dma_device_type": 2
00:08:53.566 }
00:08:53.566 ],
00:08:53.566 "driver_specific": {
00:08:53.566 "passthru": {
00:08:53.566 "name": "Passthru0",
00:08:53.566 "base_bdev_name": "Malloc0"
00:08:53.566 }
00:08:53.566 }
00:08:53.566 }
00:08:53.566 ]'
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.566 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:53.566 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:53.827 08:05:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:53.827
00:08:53.827 real 0m0.300s
00:08:53.827 user 0m0.191s
00:08:53.827 sys 0m0.038s
00:08:53.827 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.827 08:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:53.827 ************************************
00:08:53.827 END TEST rpc_integrity
00:08:53.827 ************************************
00:08:53.827 08:05:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:53.827 08:05:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.827 08:05:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.827 08:05:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:53.827 ************************************
00:08:53.827 START TEST rpc_plugins
00:08:53.827 ************************************ 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:53.827 { 00:08:53.827 "name": "Malloc1", 00:08:53.827 "aliases": [ 00:08:53.827 "76f48f21-28bb-488f-8988-97990bc52854" 00:08:53.827 ], 00:08:53.827 "product_name": "Malloc disk", 00:08:53.827 "block_size": 4096, 00:08:53.827 "num_blocks": 256, 00:08:53.827 "uuid": "76f48f21-28bb-488f-8988-97990bc52854", 00:08:53.827 "assigned_rate_limits": { 00:08:53.827 "rw_ios_per_sec": 0, 00:08:53.827 "rw_mbytes_per_sec": 0, 00:08:53.827 "r_mbytes_per_sec": 0, 00:08:53.827 "w_mbytes_per_sec": 0 00:08:53.827 }, 00:08:53.827 "claimed": false, 00:08:53.827 "zoned": false, 00:08:53.827 "supported_io_types": { 00:08:53.827 "read": true, 00:08:53.827 "write": true, 00:08:53.827 "unmap": true, 00:08:53.827 "flush": true, 00:08:53.827 "reset": true, 00:08:53.827 "nvme_admin": false, 00:08:53.827 "nvme_io": false, 00:08:53.827 "nvme_io_md": false, 00:08:53.827 "write_zeroes": true, 00:08:53.827 "zcopy": true, 00:08:53.827 "get_zone_info": false, 00:08:53.827 "zone_management": false, 00:08:53.827 
"zone_append": false, 00:08:53.827 "compare": false, 00:08:53.827 "compare_and_write": false, 00:08:53.827 "abort": true, 00:08:53.827 "seek_hole": false, 00:08:53.827 "seek_data": false, 00:08:53.827 "copy": true, 00:08:53.827 "nvme_iov_md": false 00:08:53.827 }, 00:08:53.827 "memory_domains": [ 00:08:53.827 { 00:08:53.827 "dma_device_id": "system", 00:08:53.827 "dma_device_type": 1 00:08:53.827 }, 00:08:53.827 { 00:08:53.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.827 "dma_device_type": 2 00:08:53.827 } 00:08:53.827 ], 00:08:53.827 "driver_specific": {} 00:08:53.827 } 00:08:53.827 ]' 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:53.827 08:05:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:53.827 00:08:53.827 real 0m0.157s 00:08:53.827 user 0m0.096s 00:08:53.827 sys 0m0.024s 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.827 08:05:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:53.827 ************************************ 
00:08:53.827 END TEST rpc_plugins 00:08:53.827 ************************************ 00:08:54.088 08:05:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:54.088 08:05:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.088 08:05:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.088 08:05:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 ************************************ 00:08:54.088 START TEST rpc_trace_cmd_test 00:08:54.088 ************************************ 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.088 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:54.088 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1748828", 00:08:54.088 "tpoint_group_mask": "0x8", 00:08:54.088 "iscsi_conn": { 00:08:54.088 "mask": "0x2", 00:08:54.088 "tpoint_mask": "0x0" 00:08:54.088 }, 00:08:54.088 "scsi": { 00:08:54.088 "mask": "0x4", 00:08:54.088 "tpoint_mask": "0x0" 00:08:54.088 }, 00:08:54.088 "bdev": { 00:08:54.088 "mask": "0x8", 00:08:54.088 "tpoint_mask": "0xffffffffffffffff" 00:08:54.088 }, 00:08:54.088 "nvmf_rdma": { 00:08:54.088 "mask": "0x10", 00:08:54.088 "tpoint_mask": "0x0" 00:08:54.088 }, 00:08:54.089 "nvmf_tcp": { 00:08:54.089 "mask": "0x20", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "ftl": { 00:08:54.089 "mask": "0x40", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "blobfs": { 00:08:54.089 "mask": "0x80", 00:08:54.089 
"tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "dsa": { 00:08:54.089 "mask": "0x200", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "thread": { 00:08:54.089 "mask": "0x400", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "nvme_pcie": { 00:08:54.089 "mask": "0x800", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "iaa": { 00:08:54.089 "mask": "0x1000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "nvme_tcp": { 00:08:54.089 "mask": "0x2000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "bdev_nvme": { 00:08:54.089 "mask": "0x4000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "sock": { 00:08:54.089 "mask": "0x8000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "blob": { 00:08:54.089 "mask": "0x10000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "bdev_raid": { 00:08:54.089 "mask": "0x20000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 }, 00:08:54.089 "scheduler": { 00:08:54.089 "mask": "0x40000", 00:08:54.089 "tpoint_mask": "0x0" 00:08:54.089 } 00:08:54.089 }' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:54.089 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:54.350 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:54.350 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:54.350 08:05:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:08:54.350 00:08:54.350 real 0m0.252s 00:08:54.350 user 0m0.209s 00:08:54.350 sys 0m0.032s 00:08:54.350 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.350 08:05:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.350 ************************************ 00:08:54.350 END TEST rpc_trace_cmd_test 00:08:54.350 ************************************ 00:08:54.350 08:05:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:54.350 08:05:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:54.350 08:05:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:54.350 08:05:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.350 08:05:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.350 08:05:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.350 ************************************ 00:08:54.350 START TEST rpc_daemon_integrity 00:08:54.350 ************************************ 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:54.350 08:05:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:54.350 { 00:08:54.350 "name": "Malloc2", 00:08:54.350 "aliases": [ 00:08:54.350 "ef421801-5b3c-42e2-b825-abba3e1d1c8e" 00:08:54.350 ], 00:08:54.350 "product_name": "Malloc disk", 00:08:54.350 "block_size": 512, 00:08:54.350 "num_blocks": 16384, 00:08:54.350 "uuid": "ef421801-5b3c-42e2-b825-abba3e1d1c8e", 00:08:54.350 "assigned_rate_limits": { 00:08:54.350 "rw_ios_per_sec": 0, 00:08:54.350 "rw_mbytes_per_sec": 0, 00:08:54.350 "r_mbytes_per_sec": 0, 00:08:54.350 "w_mbytes_per_sec": 0 00:08:54.350 }, 00:08:54.350 "claimed": false, 00:08:54.350 "zoned": false, 00:08:54.350 "supported_io_types": { 00:08:54.350 "read": true, 00:08:54.350 "write": true, 00:08:54.350 "unmap": true, 00:08:54.350 "flush": true, 00:08:54.350 "reset": true, 00:08:54.350 "nvme_admin": false, 00:08:54.350 "nvme_io": false, 00:08:54.350 "nvme_io_md": false, 00:08:54.350 "write_zeroes": true, 00:08:54.350 "zcopy": true, 00:08:54.350 "get_zone_info": false, 00:08:54.350 "zone_management": false, 00:08:54.350 "zone_append": false, 00:08:54.350 "compare": false, 00:08:54.350 "compare_and_write": false, 00:08:54.350 "abort": true, 00:08:54.350 "seek_hole": false, 00:08:54.350 "seek_data": false, 00:08:54.350 "copy": true, 00:08:54.350 "nvme_iov_md": false 00:08:54.350 }, 00:08:54.350 "memory_domains": [ 00:08:54.350 { 
00:08:54.350 "dma_device_id": "system", 00:08:54.350 "dma_device_type": 1 00:08:54.350 }, 00:08:54.350 { 00:08:54.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.350 "dma_device_type": 2 00:08:54.350 } 00:08:54.350 ], 00:08:54.350 "driver_specific": {} 00:08:54.350 } 00:08:54.350 ]' 00:08:54.350 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.616 [2024-11-20 08:05:59.100156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:54.616 [2024-11-20 08:05:59.100186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.616 [2024-11-20 08:05:59.100200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x732380 00:08:54.616 [2024-11-20 08:05:59.100207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.616 [2024-11-20 08:05:59.101471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.616 [2024-11-20 08:05:59.101491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:54.616 Passthru0 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:54.616 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:54.616 { 00:08:54.616 "name": "Malloc2", 00:08:54.617 "aliases": [ 00:08:54.617 "ef421801-5b3c-42e2-b825-abba3e1d1c8e" 00:08:54.617 ], 00:08:54.617 "product_name": "Malloc disk", 00:08:54.617 "block_size": 512, 00:08:54.617 "num_blocks": 16384, 00:08:54.617 "uuid": "ef421801-5b3c-42e2-b825-abba3e1d1c8e", 00:08:54.617 "assigned_rate_limits": { 00:08:54.617 "rw_ios_per_sec": 0, 00:08:54.617 "rw_mbytes_per_sec": 0, 00:08:54.617 "r_mbytes_per_sec": 0, 00:08:54.617 "w_mbytes_per_sec": 0 00:08:54.617 }, 00:08:54.617 "claimed": true, 00:08:54.617 "claim_type": "exclusive_write", 00:08:54.617 "zoned": false, 00:08:54.617 "supported_io_types": { 00:08:54.617 "read": true, 00:08:54.617 "write": true, 00:08:54.617 "unmap": true, 00:08:54.617 "flush": true, 00:08:54.617 "reset": true, 00:08:54.617 "nvme_admin": false, 00:08:54.617 "nvme_io": false, 00:08:54.617 "nvme_io_md": false, 00:08:54.617 "write_zeroes": true, 00:08:54.617 "zcopy": true, 00:08:54.617 "get_zone_info": false, 00:08:54.617 "zone_management": false, 00:08:54.617 "zone_append": false, 00:08:54.617 "compare": false, 00:08:54.617 "compare_and_write": false, 00:08:54.618 "abort": true, 00:08:54.618 "seek_hole": false, 00:08:54.618 "seek_data": false, 00:08:54.618 "copy": true, 00:08:54.618 "nvme_iov_md": false 00:08:54.618 }, 00:08:54.618 "memory_domains": [ 00:08:54.618 { 00:08:54.618 "dma_device_id": "system", 00:08:54.618 "dma_device_type": 1 00:08:54.618 }, 00:08:54.618 { 00:08:54.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.618 "dma_device_type": 2 00:08:54.618 } 00:08:54.618 ], 00:08:54.618 "driver_specific": {} 00:08:54.618 }, 00:08:54.618 { 00:08:54.618 "name": "Passthru0", 00:08:54.618 "aliases": [ 00:08:54.618 "99a06c56-332a-5d65-b878-3e72ef6bc6fe" 00:08:54.618 ], 00:08:54.618 "product_name": "passthru", 00:08:54.618 "block_size": 512, 00:08:54.618 "num_blocks": 16384, 00:08:54.618 "uuid": 
"99a06c56-332a-5d65-b878-3e72ef6bc6fe", 00:08:54.618 "assigned_rate_limits": { 00:08:54.618 "rw_ios_per_sec": 0, 00:08:54.618 "rw_mbytes_per_sec": 0, 00:08:54.618 "r_mbytes_per_sec": 0, 00:08:54.618 "w_mbytes_per_sec": 0 00:08:54.618 }, 00:08:54.618 "claimed": false, 00:08:54.618 "zoned": false, 00:08:54.618 "supported_io_types": { 00:08:54.618 "read": true, 00:08:54.618 "write": true, 00:08:54.618 "unmap": true, 00:08:54.618 "flush": true, 00:08:54.618 "reset": true, 00:08:54.618 "nvme_admin": false, 00:08:54.618 "nvme_io": false, 00:08:54.619 "nvme_io_md": false, 00:08:54.619 "write_zeroes": true, 00:08:54.619 "zcopy": true, 00:08:54.619 "get_zone_info": false, 00:08:54.619 "zone_management": false, 00:08:54.619 "zone_append": false, 00:08:54.619 "compare": false, 00:08:54.619 "compare_and_write": false, 00:08:54.619 "abort": true, 00:08:54.619 "seek_hole": false, 00:08:54.619 "seek_data": false, 00:08:54.619 "copy": true, 00:08:54.619 "nvme_iov_md": false 00:08:54.619 }, 00:08:54.619 "memory_domains": [ 00:08:54.619 { 00:08:54.619 "dma_device_id": "system", 00:08:54.619 "dma_device_type": 1 00:08:54.619 }, 00:08:54.619 { 00:08:54.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.619 "dma_device_type": 2 00:08:54.619 } 00:08:54.619 ], 00:08:54.619 "driver_specific": { 00:08:54.619 "passthru": { 00:08:54.619 "name": "Passthru0", 00:08:54.619 "base_bdev_name": "Malloc2" 00:08:54.619 } 00:08:54.619 } 00:08:54.619 } 00:08:54.619 ]' 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.619 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:54.621 00:08:54.621 real 0m0.306s 00:08:54.621 user 0m0.199s 00:08:54.621 sys 0m0.039s 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.621 08:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 ************************************ 00:08:54.621 END TEST rpc_daemon_integrity 00:08:54.621 ************************************ 00:08:54.621 08:05:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:54.621 08:05:59 rpc -- rpc/rpc.sh@84 -- # killprocess 1748828 00:08:54.621 08:05:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 1748828 ']' 00:08:54.621 08:05:59 rpc -- common/autotest_common.sh@958 -- # kill -0 1748828 00:08:54.621 08:05:59 rpc -- common/autotest_common.sh@959 -- # uname 00:08:54.622 08:05:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.622 08:05:59 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1748828 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1748828' 00:08:54.886 killing process with pid 1748828 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@973 -- # kill 1748828 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@978 -- # wait 1748828 00:08:54.886 00:08:54.886 real 0m2.635s 00:08:54.886 user 0m3.438s 00:08:54.886 sys 0m0.737s 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.886 08:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.886 ************************************ 00:08:54.886 END TEST rpc 00:08:54.887 ************************************ 00:08:54.887 08:05:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:54.887 08:05:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.887 08:05:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.887 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:08:55.148 ************************************ 00:08:55.148 START TEST skip_rpc 00:08:55.148 ************************************ 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:55.148 * Looking for test storage... 
00:08:55.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.148 08:05:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.148 --rc genhtml_branch_coverage=1 00:08:55.148 --rc genhtml_function_coverage=1 00:08:55.148 --rc genhtml_legend=1 00:08:55.148 --rc geninfo_all_blocks=1 00:08:55.148 --rc geninfo_unexecuted_blocks=1 00:08:55.148 00:08:55.148 ' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.148 --rc genhtml_branch_coverage=1 00:08:55.148 --rc genhtml_function_coverage=1 00:08:55.148 --rc genhtml_legend=1 00:08:55.148 --rc geninfo_all_blocks=1 00:08:55.148 --rc geninfo_unexecuted_blocks=1 00:08:55.148 00:08:55.148 ' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:55.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.148 --rc genhtml_branch_coverage=1 00:08:55.148 --rc genhtml_function_coverage=1 00:08:55.148 --rc genhtml_legend=1 00:08:55.148 --rc geninfo_all_blocks=1 00:08:55.148 --rc geninfo_unexecuted_blocks=1 00:08:55.148 00:08:55.148 ' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.148 --rc genhtml_branch_coverage=1 00:08:55.148 --rc genhtml_function_coverage=1 00:08:55.148 --rc genhtml_legend=1 00:08:55.148 --rc geninfo_all_blocks=1 00:08:55.148 --rc geninfo_unexecuted_blocks=1 00:08:55.148 00:08:55.148 ' 00:08:55.148 08:05:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:55.148 08:05:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:55.148 08:05:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.148 08:05:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.148 ************************************ 00:08:55.148 START TEST skip_rpc 00:08:55.148 ************************************ 00:08:55.148 08:05:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:55.148 08:05:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1749463 00:08:55.148 08:05:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.148 08:05:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:55.148 08:05:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:08:55.408 [2024-11-20 08:05:59.923234] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:08:55.408 [2024-11-20 08:05:59.923294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749463 ] 00:08:55.408 [2024-11-20 08:06:00.006541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.408 [2024-11-20 08:06:00.053185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.705 08:06:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1749463 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1749463 ']' 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1749463 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749463 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749463' 00:09:00.705 killing process with pid 1749463 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1749463 00:09:00.705 08:06:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1749463 00:09:00.705 00:09:00.705 real 0m5.276s 00:09:00.705 user 0m5.059s 00:09:00.705 sys 0m0.251s 00:09:00.705 08:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.705 08:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 00:09:00.705 END TEST skip_rpc 00:09:00.705 ************************************ 00:09:00.705 08:06:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:00.705 08:06:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.705 08:06:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.705 08:06:05 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 00:09:00.705 START TEST skip_rpc_with_json 00:09:00.705 ************************************ 00:09:00.705 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:00.705 08:06:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:00.705 08:06:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1750680 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1750680 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1750680 ']' 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:00.706 08:06:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:00.706 [2024-11-20 08:06:05.270237] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:00.706 [2024-11-20 08:06:05.270287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750680 ] 00:09:00.706 [2024-11-20 08:06:05.348121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.706 [2024-11-20 08:06:05.385830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 [2024-11-20 08:06:06.051486] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:01.733 request: 00:09:01.733 { 00:09:01.733 "trtype": "tcp", 00:09:01.733 "method": "nvmf_get_transports", 00:09:01.733 "req_id": 1 00:09:01.733 } 00:09:01.733 Got JSON-RPC error response 00:09:01.733 response: 00:09:01.733 { 00:09:01.733 "code": -19, 00:09:01.733 "message": "No such device" 00:09:01.733 } 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 [2024-11-20 08:06:06.059600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.733 08:06:06 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.733 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:01.733 { 00:09:01.733 "subsystems": [ 00:09:01.733 { 00:09:01.733 "subsystem": "fsdev", 00:09:01.733 "config": [ 00:09:01.733 { 00:09:01.733 "method": "fsdev_set_opts", 00:09:01.733 "params": { 00:09:01.733 "fsdev_io_pool_size": 65535, 00:09:01.733 "fsdev_io_cache_size": 256 00:09:01.733 } 00:09:01.733 } 00:09:01.733 ] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "vfio_user_target", 00:09:01.733 "config": null 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "keyring", 00:09:01.733 "config": [] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "iobuf", 00:09:01.733 "config": [ 00:09:01.733 { 00:09:01.733 "method": "iobuf_set_options", 00:09:01.733 "params": { 00:09:01.733 "small_pool_count": 8192, 00:09:01.733 "large_pool_count": 1024, 00:09:01.733 "small_bufsize": 8192, 00:09:01.733 "large_bufsize": 135168, 00:09:01.733 "enable_numa": false 00:09:01.733 } 00:09:01.733 } 00:09:01.733 ] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "sock", 00:09:01.733 "config": [ 00:09:01.733 { 00:09:01.733 "method": "sock_set_default_impl", 00:09:01.733 "params": { 00:09:01.733 "impl_name": "posix" 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "sock_impl_set_options", 00:09:01.733 "params": { 00:09:01.733 "impl_name": "ssl", 00:09:01.733 "recv_buf_size": 4096, 00:09:01.733 "send_buf_size": 4096, 
00:09:01.733 "enable_recv_pipe": true, 00:09:01.733 "enable_quickack": false, 00:09:01.733 "enable_placement_id": 0, 00:09:01.733 "enable_zerocopy_send_server": true, 00:09:01.733 "enable_zerocopy_send_client": false, 00:09:01.733 "zerocopy_threshold": 0, 00:09:01.733 "tls_version": 0, 00:09:01.733 "enable_ktls": false 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "sock_impl_set_options", 00:09:01.733 "params": { 00:09:01.733 "impl_name": "posix", 00:09:01.733 "recv_buf_size": 2097152, 00:09:01.733 "send_buf_size": 2097152, 00:09:01.733 "enable_recv_pipe": true, 00:09:01.733 "enable_quickack": false, 00:09:01.733 "enable_placement_id": 0, 00:09:01.733 "enable_zerocopy_send_server": true, 00:09:01.733 "enable_zerocopy_send_client": false, 00:09:01.733 "zerocopy_threshold": 0, 00:09:01.733 "tls_version": 0, 00:09:01.733 "enable_ktls": false 00:09:01.733 } 00:09:01.733 } 00:09:01.733 ] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "vmd", 00:09:01.733 "config": [] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "accel", 00:09:01.733 "config": [ 00:09:01.733 { 00:09:01.733 "method": "accel_set_options", 00:09:01.733 "params": { 00:09:01.733 "small_cache_size": 128, 00:09:01.733 "large_cache_size": 16, 00:09:01.733 "task_count": 2048, 00:09:01.733 "sequence_count": 2048, 00:09:01.733 "buf_count": 2048 00:09:01.733 } 00:09:01.733 } 00:09:01.733 ] 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "subsystem": "bdev", 00:09:01.733 "config": [ 00:09:01.733 { 00:09:01.733 "method": "bdev_set_options", 00:09:01.733 "params": { 00:09:01.733 "bdev_io_pool_size": 65535, 00:09:01.733 "bdev_io_cache_size": 256, 00:09:01.733 "bdev_auto_examine": true, 00:09:01.733 "iobuf_small_cache_size": 128, 00:09:01.733 "iobuf_large_cache_size": 16 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "bdev_raid_set_options", 00:09:01.733 "params": { 00:09:01.733 "process_window_size_kb": 1024, 00:09:01.733 "process_max_bandwidth_mb_sec": 0 
00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "bdev_iscsi_set_options", 00:09:01.733 "params": { 00:09:01.733 "timeout_sec": 30 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "bdev_nvme_set_options", 00:09:01.733 "params": { 00:09:01.733 "action_on_timeout": "none", 00:09:01.733 "timeout_us": 0, 00:09:01.733 "timeout_admin_us": 0, 00:09:01.733 "keep_alive_timeout_ms": 10000, 00:09:01.733 "arbitration_burst": 0, 00:09:01.733 "low_priority_weight": 0, 00:09:01.733 "medium_priority_weight": 0, 00:09:01.733 "high_priority_weight": 0, 00:09:01.733 "nvme_adminq_poll_period_us": 10000, 00:09:01.733 "nvme_ioq_poll_period_us": 0, 00:09:01.733 "io_queue_requests": 0, 00:09:01.733 "delay_cmd_submit": true, 00:09:01.733 "transport_retry_count": 4, 00:09:01.733 "bdev_retry_count": 3, 00:09:01.733 "transport_ack_timeout": 0, 00:09:01.733 "ctrlr_loss_timeout_sec": 0, 00:09:01.733 "reconnect_delay_sec": 0, 00:09:01.733 "fast_io_fail_timeout_sec": 0, 00:09:01.733 "disable_auto_failback": false, 00:09:01.733 "generate_uuids": false, 00:09:01.733 "transport_tos": 0, 00:09:01.733 "nvme_error_stat": false, 00:09:01.733 "rdma_srq_size": 0, 00:09:01.733 "io_path_stat": false, 00:09:01.733 "allow_accel_sequence": false, 00:09:01.733 "rdma_max_cq_size": 0, 00:09:01.733 "rdma_cm_event_timeout_ms": 0, 00:09:01.733 "dhchap_digests": [ 00:09:01.733 "sha256", 00:09:01.733 "sha384", 00:09:01.733 "sha512" 00:09:01.733 ], 00:09:01.733 "dhchap_dhgroups": [ 00:09:01.733 "null", 00:09:01.733 "ffdhe2048", 00:09:01.733 "ffdhe3072", 00:09:01.733 "ffdhe4096", 00:09:01.733 "ffdhe6144", 00:09:01.733 "ffdhe8192" 00:09:01.733 ] 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "bdev_nvme_set_hotplug", 00:09:01.733 "params": { 00:09:01.733 "period_us": 100000, 00:09:01.733 "enable": false 00:09:01.733 } 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "method": "bdev_wait_for_examine" 00:09:01.733 } 00:09:01.733 ] 00:09:01.733 }, 00:09:01.733 { 
00:09:01.733 "subsystem": "scsi", 00:09:01.733 "config": null 00:09:01.733 }, 00:09:01.734 { 00:09:01.734 "subsystem": "scheduler", 00:09:01.734 "config": [ 00:09:01.734 { 00:09:01.734 "method": "framework_set_scheduler", 00:09:01.734 "params": { 00:09:01.734 "name": "static" 00:09:01.734 } 00:09:01.734 } 00:09:01.734 ] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "vhost_scsi", 00:09:01.734 "config": [] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "vhost_blk", 00:09:01.734 "config": [] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "ublk", 00:09:01.734 "config": [] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "nbd", 00:09:01.734 "config": [] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "nvmf", 00:09:01.734 "config": [ 00:09:01.734 { 00:09:01.734 "method": "nvmf_set_config", 00:09:01.734 "params": { 00:09:01.734 "discovery_filter": "match_any", 00:09:01.734 "admin_cmd_passthru": { 00:09:01.734 "identify_ctrlr": false 00:09:01.734 }, 00:09:01.734 "dhchap_digests": [ 00:09:01.734 "sha256", 00:09:01.734 "sha384", 00:09:01.734 "sha512" 00:09:01.734 ], 00:09:01.734 "dhchap_dhgroups": [ 00:09:01.734 "null", 00:09:01.734 "ffdhe2048", 00:09:01.734 "ffdhe3072", 00:09:01.734 "ffdhe4096", 00:09:01.734 "ffdhe6144", 00:09:01.734 "ffdhe8192" 00:09:01.734 ] 00:09:01.734 } 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "method": "nvmf_set_max_subsystems", 00:09:01.734 "params": { 00:09:01.734 "max_subsystems": 1024 00:09:01.734 } 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "method": "nvmf_set_crdt", 00:09:01.734 "params": { 00:09:01.734 "crdt1": 0, 00:09:01.734 "crdt2": 0, 00:09:01.734 "crdt3": 0 00:09:01.734 } 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "method": "nvmf_create_transport", 00:09:01.734 "params": { 00:09:01.734 "trtype": "TCP", 00:09:01.734 "max_queue_depth": 128, 00:09:01.734 "max_io_qpairs_per_ctrlr": 127, 00:09:01.734 "in_capsule_data_size": 4096, 00:09:01.734 "max_io_size": 131072, 00:09:01.734 
"io_unit_size": 131072, 00:09:01.734 "max_aq_depth": 128, 00:09:01.734 "num_shared_buffers": 511, 00:09:01.734 "buf_cache_size": 4294967295, 00:09:01.734 "dif_insert_or_strip": false, 00:09:01.734 "zcopy": false, 00:09:01.734 "c2h_success": true, 00:09:01.734 "sock_priority": 0, 00:09:01.734 "abort_timeout_sec": 1, 00:09:01.734 "ack_timeout": 0, 00:09:01.734 "data_wr_pool_size": 0 00:09:01.734 } 00:09:01.734 } 00:09:01.734 ] 00:09:01.734 }, 00:09:01.734 { 00:09:01.734 "subsystem": "iscsi", 00:09:01.734 "config": [ 00:09:01.734 { 00:09:01.734 "method": "iscsi_set_options", 00:09:01.734 "params": { 00:09:01.734 "node_base": "iqn.2016-06.io.spdk", 00:09:01.734 "max_sessions": 128, 00:09:01.734 "max_connections_per_session": 2, 00:09:01.734 "max_queue_depth": 64, 00:09:01.734 "default_time2wait": 2, 00:09:01.734 "default_time2retain": 20, 00:09:01.734 "first_burst_length": 8192, 00:09:01.734 "immediate_data": true, 00:09:01.734 "allow_duplicated_isid": false, 00:09:01.734 "error_recovery_level": 0, 00:09:01.734 "nop_timeout": 60, 00:09:01.734 "nop_in_interval": 30, 00:09:01.734 "disable_chap": false, 00:09:01.734 "require_chap": false, 00:09:01.734 "mutual_chap": false, 00:09:01.734 "chap_group": 0, 00:09:01.734 "max_large_datain_per_connection": 64, 00:09:01.734 "max_r2t_per_connection": 4, 00:09:01.734 "pdu_pool_size": 36864, 00:09:01.734 "immediate_data_pool_size": 16384, 00:09:01.734 "data_out_pool_size": 2048 00:09:01.734 } 00:09:01.734 } 00:09:01.734 ] 00:09:01.734 } 00:09:01.734 ] 00:09:01.734 } 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1750680 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1750680 ']' 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1750680 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750680 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750680' 00:09:01.734 killing process with pid 1750680 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1750680 00:09:01.734 08:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1750680 00:09:02.018 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1750959 00:09:02.018 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:02.018 08:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1750959 ']' 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750959' 00:09:07.305 killing process with pid 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1750959 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:07.305 00:09:07.305 real 0m6.546s 00:09:07.305 user 0m6.417s 00:09:07.305 sys 0m0.539s 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.305 08:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:07.305 ************************************ 00:09:07.306 END TEST skip_rpc_with_json 00:09:07.306 ************************************ 00:09:07.306 08:06:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.306 ************************************ 00:09:07.306 START TEST skip_rpc_with_delay 00:09:07.306 ************************************ 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.306 [2024-11-20 08:06:11.887619] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.306 00:09:07.306 real 0m0.072s 00:09:07.306 user 0m0.045s 00:09:07.306 sys 0m0.026s 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.306 08:06:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:07.306 ************************************ 00:09:07.306 END TEST skip_rpc_with_delay 00:09:07.306 ************************************ 00:09:07.306 08:06:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:07.306 08:06:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:07.306 08:06:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.306 08:06:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.306 ************************************ 00:09:07.306 START TEST exit_on_failed_rpc_init 00:09:07.306 ************************************ 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1752546 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1752546 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1752546 ']' 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:07.306 08:06:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.567 [2024-11-20 08:06:12.044026] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:07.567 [2024-11-20 08:06:12.044092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752546 ] 00:09:07.567 [2024-11-20 08:06:12.126855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.567 [2024-11-20 08:06:12.169049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:08.140 08:06:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:08.140 08:06:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:08.428 [2024-11-20 08:06:12.882225] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:08.428 [2024-11-20 08:06:12.882279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752805 ] 00:09:08.428 [2024-11-20 08:06:12.974535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.428 [2024-11-20 08:06:13.010310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.428 [2024-11-20 08:06:13.010357] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:08.428 [2024-11-20 08:06:13.010367] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:08.428 [2024-11-20 08:06:13.010375] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1752546 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1752546 ']' 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1752546 00:09:08.428 08:06:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752546 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752546' 00:09:08.428 killing process with pid 1752546 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1752546 00:09:08.428 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1752546 00:09:08.689 00:09:08.689 real 0m1.338s 00:09:08.689 user 0m1.549s 00:09:08.689 sys 0m0.385s 00:09:08.689 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.689 08:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:08.689 ************************************ 00:09:08.689 END TEST exit_on_failed_rpc_init 00:09:08.689 ************************************ 00:09:08.689 08:06:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:08.689 00:09:08.689 real 0m13.706s 00:09:08.689 user 0m13.282s 00:09:08.689 sys 0m1.490s 00:09:08.689 08:06:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.689 08:06:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.689 ************************************ 00:09:08.689 END TEST skip_rpc 00:09:08.689 ************************************ 00:09:08.689 08:06:13 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:08.689 08:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.689 08:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.689 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:09:08.949 ************************************ 00:09:08.949 START TEST rpc_client 00:09:08.949 ************************************ 00:09:08.949 08:06:13 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:08.949 * Looking for test storage... 00:09:08.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:08.949 08:06:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.949 08:06:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.949 08:06:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.949 08:06:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.949 08:06:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.949 08:06:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.950 08:06:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.950 --rc genhtml_branch_coverage=1 00:09:08.950 --rc genhtml_function_coverage=1 00:09:08.950 --rc genhtml_legend=1 00:09:08.950 --rc geninfo_all_blocks=1 00:09:08.950 --rc geninfo_unexecuted_blocks=1 00:09:08.950 00:09:08.950 ' 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.950 --rc genhtml_branch_coverage=1 
00:09:08.950 --rc genhtml_function_coverage=1 00:09:08.950 --rc genhtml_legend=1 00:09:08.950 --rc geninfo_all_blocks=1 00:09:08.950 --rc geninfo_unexecuted_blocks=1 00:09:08.950 00:09:08.950 ' 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.950 --rc genhtml_branch_coverage=1 00:09:08.950 --rc genhtml_function_coverage=1 00:09:08.950 --rc genhtml_legend=1 00:09:08.950 --rc geninfo_all_blocks=1 00:09:08.950 --rc geninfo_unexecuted_blocks=1 00:09:08.950 00:09:08.950 ' 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.950 --rc genhtml_branch_coverage=1 00:09:08.950 --rc genhtml_function_coverage=1 00:09:08.950 --rc genhtml_legend=1 00:09:08.950 --rc geninfo_all_blocks=1 00:09:08.950 --rc geninfo_unexecuted_blocks=1 00:09:08.950 00:09:08.950 ' 00:09:08.950 08:06:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:08.950 OK 00:09:08.950 08:06:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:08.950 00:09:08.950 real 0m0.223s 00:09:08.950 user 0m0.138s 00:09:08.950 sys 0m0.097s 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.950 08:06:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:08.950 ************************************ 00:09:08.950 END TEST rpc_client 00:09:08.950 ************************************ 00:09:09.212 08:06:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:09.212 08:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.212 08:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.212 08:06:13 -- common/autotest_common.sh@10 
-- # set +x 00:09:09.212 ************************************ 00:09:09.212 START TEST json_config 00:09:09.212 ************************************ 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.212 08:06:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.212 08:06:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.212 08:06:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.212 08:06:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.212 08:06:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.212 08:06:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:09.212 08:06:13 json_config -- scripts/common.sh@345 -- # : 1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.212 08:06:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.212 08:06:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@353 -- # local d=1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.212 08:06:13 json_config -- scripts/common.sh@355 -- # echo 1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.212 08:06:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@353 -- # local d=2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.212 08:06:13 json_config -- scripts/common.sh@355 -- # echo 2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.212 08:06:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.212 08:06:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.212 08:06:13 json_config -- scripts/common.sh@368 -- # return 0 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.212 --rc genhtml_branch_coverage=1 00:09:09.212 --rc genhtml_function_coverage=1 00:09:09.212 --rc genhtml_legend=1 00:09:09.212 --rc geninfo_all_blocks=1 00:09:09.212 --rc geninfo_unexecuted_blocks=1 00:09:09.212 00:09:09.212 ' 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.212 --rc genhtml_branch_coverage=1 00:09:09.212 --rc genhtml_function_coverage=1 00:09:09.212 --rc genhtml_legend=1 00:09:09.212 --rc geninfo_all_blocks=1 00:09:09.212 --rc geninfo_unexecuted_blocks=1 00:09:09.212 00:09:09.212 ' 00:09:09.212 08:06:13 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.212 --rc genhtml_branch_coverage=1 00:09:09.212 --rc genhtml_function_coverage=1 00:09:09.212 --rc genhtml_legend=1 00:09:09.212 --rc geninfo_all_blocks=1 00:09:09.212 --rc geninfo_unexecuted_blocks=1 00:09:09.212 00:09:09.212 ' 00:09:09.212 08:06:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.212 --rc genhtml_branch_coverage=1 00:09:09.212 --rc genhtml_function_coverage=1 00:09:09.212 --rc genhtml_legend=1 00:09:09.212 --rc geninfo_all_blocks=1 00:09:09.212 --rc geninfo_unexecuted_blocks=1 00:09:09.212 00:09:09.212 ' 00:09:09.212 08:06:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.212 08:06:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.212 08:06:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.212 08:06:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.212 08:06:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.212 08:06:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.212 08:06:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.212 08:06:13 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.212 08:06:13 json_config -- paths/export.sh@5 -- # export PATH 00:09:09.212 08:06:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:09.212 08:06:13 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:09.212 08:06:13 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:09.212 08:06:13 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@50 -- # : 0 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:09.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:09:09.212 08:06:13 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:09.213 08:06:13 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:09.213 08:06:13 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:09.213 08:06:13 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:09.213 INFO: JSON configuration test init 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:09.213 08:06:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:09.213 08:06:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 08:06:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 08:06:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:09.474 08:06:13 json_config -- json_config/common.sh@9 -- # local app=target 00:09:09.474 08:06:13 json_config -- json_config/common.sh@10 -- # shift 00:09:09.474 08:06:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:09.474 08:06:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:09.474 08:06:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:09.474 08:06:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:09.474 08:06:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:09.474 08:06:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1753154 00:09:09.474 08:06:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:09.474 Waiting for target to run... 
00:09:09.474 08:06:13 json_config -- json_config/common.sh@25 -- # waitforlisten 1753154 /var/tmp/spdk_tgt.sock 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 1753154 ']' 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:09.474 08:06:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.474 08:06:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.474 [2024-11-20 08:06:14.013052] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:09.474 [2024-11-20 08:06:14.013122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753154 ] 00:09:09.734 [2024-11-20 08:06:14.416422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.734 [2024-11-20 08:06:14.445805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:10.304 08:06:14 json_config -- json_config/common.sh@26 -- # echo '' 00:09:10.304 00:09:10.304 08:06:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:10.304 08:06:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.304 08:06:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:10.304 08:06:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.304 08:06:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.305 08:06:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:10.305 08:06:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:10.305 08:06:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:10.874 08:06:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.874 08:06:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:10.874 08:06:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:10.874 08:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:11.134 08:06:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@54 -- # sort 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:11.135 08:06:15 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:11.135 08:06:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:11.135 08:06:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:11.135 08:06:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.135 08:06:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:11.135 08:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:11.135 MallocForNvmf0 00:09:11.135 08:06:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:09:11.135 08:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:11.395 MallocForNvmf1 00:09:11.395 08:06:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:11.395 08:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:11.656 [2024-11-20 08:06:16.174171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.656 08:06:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.656 08:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.656 08:06:16 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:11.656 08:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:11.916 08:06:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:11.916 08:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:12.177 08:06:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:12.177 08:06:16 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:12.177 [2024-11-20 08:06:16.832315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:12.177 08:06:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:12.177 08:06:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.177 08:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.177 08:06:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:12.177 08:06:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.177 08:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 08:06:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:12.437 08:06:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:12.437 08:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:12.437 MallocBdevForConfigChangeCheck 00:09:12.437 08:06:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:12.437 08:06:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.437 08:06:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 08:06:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:12.437 08:06:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:13.006 08:06:17 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:09:13.006 INFO: shutting down applications... 00:09:13.006 08:06:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:13.006 08:06:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:13.006 08:06:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:13.006 08:06:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:13.266 Calling clear_iscsi_subsystem 00:09:13.266 Calling clear_nvmf_subsystem 00:09:13.266 Calling clear_nbd_subsystem 00:09:13.266 Calling clear_ublk_subsystem 00:09:13.266 Calling clear_vhost_blk_subsystem 00:09:13.266 Calling clear_vhost_scsi_subsystem 00:09:13.266 Calling clear_bdev_subsystem 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:13.266 08:06:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:09:13.525 08:06:18 json_config -- json_config/json_config.sh@352 -- # break 00:09:13.525 08:06:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:13.525 08:06:18 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:09:13.525 08:06:18 json_config -- json_config/common.sh@31 -- # local app=target 00:09:13.525 08:06:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:13.525 08:06:18 json_config -- json_config/common.sh@35 -- # [[ -n 1753154 ]] 00:09:13.525 08:06:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1753154 00:09:13.525 08:06:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:13.525 08:06:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:13.525 08:06:18 json_config -- json_config/common.sh@41 -- # kill -0 1753154 00:09:13.525 08:06:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:14.096 08:06:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:14.096 08:06:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:14.096 08:06:18 json_config -- json_config/common.sh@41 -- # kill -0 1753154 00:09:14.096 08:06:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:14.096 08:06:18 json_config -- json_config/common.sh@43 -- # break 00:09:14.096 08:06:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:14.096 08:06:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:14.096 SPDK target shutdown done 00:09:14.096 08:06:18 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:14.096 INFO: relaunching applications... 
00:09:14.096 08:06:18 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:14.096 08:06:18 json_config -- json_config/common.sh@9 -- # local app=target 00:09:14.096 08:06:18 json_config -- json_config/common.sh@10 -- # shift 00:09:14.096 08:06:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:14.097 08:06:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:14.097 08:06:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:14.097 08:06:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.097 08:06:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.097 08:06:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1754147 00:09:14.097 08:06:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:14.097 Waiting for target to run... 00:09:14.097 08:06:18 json_config -- json_config/common.sh@25 -- # waitforlisten 1754147 /var/tmp/spdk_tgt.sock 00:09:14.097 08:06:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 1754147 ']' 00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:14.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.097 08:06:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.097 [2024-11-20 08:06:18.798504] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:14.097 [2024-11-20 08:06:18.798571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754147 ] 00:09:14.667 [2024-11-20 08:06:19.113169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.667 [2024-11-20 08:06:19.143083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.236 [2024-11-20 08:06:19.668480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.236 [2024-11-20 08:06:19.700882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:15.236 08:06:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.236 08:06:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:15.236 08:06:19 json_config -- json_config/common.sh@26 -- # echo '' 00:09:15.236 00:09:15.236 08:06:19 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:15.236 08:06:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:15.236 INFO: Checking if target configuration is the same... 
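The `waitforlisten` step traced above polls the relaunched target's RPC UNIX socket until it accepts connections. A minimal sketch of that polling idea, assuming `python3` is available for the socket probe (bash has no native UNIX-socket client); the function name and retry budget follow the trace, the probe itself is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: retry connecting to a UNIX domain
# socket until the app is up or max_retries is exhausted.
waitforlisten() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        if python3 -c 'import socket,sys; s=socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])' "$sock" 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: a listener that comes up after a short delay, like a starting app.
sock=$(mktemp -u /tmp/demo_tgt.sock.XXXXXX)
python3 -c 'import socket,sys,time; time.sleep(0.3); s=socket.socket(socket.AF_UNIX); s.bind(sys.argv[1]); s.listen(1); time.sleep(5)' "$sock" &
waitforlisten "$sock" && echo 'Waiting for target to run... done'
rm -f "$sock"
```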
00:09:15.236 08:06:19 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:15.236 08:06:19 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:15.236 08:06:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.236 + '[' 2 -ne 2 ']' 00:09:15.236 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:15.236 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:15.236 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:15.236 +++ basename /dev/fd/62 00:09:15.236 ++ mktemp /tmp/62.XXX 00:09:15.236 + tmp_file_1=/tmp/62.ipb 00:09:15.236 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:15.237 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:15.237 + tmp_file_2=/tmp/spdk_tgt_config.json.iSm 00:09:15.237 + ret=0 00:09:15.237 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:15.495 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:15.495 + diff -u /tmp/62.ipb /tmp/spdk_tgt_config.json.iSm 00:09:15.495 + echo 'INFO: JSON config files are the same' 00:09:15.495 INFO: JSON config files are the same 00:09:15.495 + rm /tmp/62.ipb /tmp/spdk_tgt_config.json.iSm 00:09:15.495 + exit 0 00:09:15.495 08:06:20 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:15.495 08:06:20 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:15.495 INFO: changing configuration and checking if this can be detected... 
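The same-configuration check above dumps both configs to `mktemp` files, normalizes each, and diffs them, so that ordering differences do not register as changes. A sketch of that normalize-then-diff pattern, using python3's json module with sorted keys as a stand-in for `config_filter.py -method sort` (the real filter also normalizes nested subsystem ordering):

```shell
#!/usr/bin/env bash
# Sketch of the json_diff.sh pattern traced above: write both configs to
# temp files, normalize each (sorted keys here), then compare with diff -u.
normalize_json() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

json_diff() {
    local tmp1 tmp2 ret=0
    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    normalize_json <"$1" >"$tmp1"
    normalize_json <"$2" >"$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
        ret=1
    fi
    rm -f "$tmp1" "$tmp2"
    return $ret
}

# Demo: identical content, different key order, should compare equal.
echo '{"subsystems": [], "method": "save_config"}' > /tmp/demo_a.json
echo '{"method": "save_config", "subsystems": []}' > /tmp/demo_b.json
json_diff /tmp/demo_a.json /tmp/demo_b.json
```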
00:09:15.495 08:06:20 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:15.495 08:06:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:15.754 08:06:20 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:15.754 08:06:20 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:15.754 08:06:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.754 + '[' 2 -ne 2 ']' 00:09:15.754 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:15.754 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:09:15.754 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:15.754 +++ basename /dev/fd/62 00:09:15.754 ++ mktemp /tmp/62.XXX 00:09:15.754 + tmp_file_1=/tmp/62.p5m 00:09:15.754 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:15.754 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:15.754 + tmp_file_2=/tmp/spdk_tgt_config.json.1Ie 00:09:15.754 + ret=0 00:09:15.754 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:16.014 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:16.014 + diff -u /tmp/62.p5m /tmp/spdk_tgt_config.json.1Ie 00:09:16.014 + ret=1 00:09:16.014 + echo '=== Start of file: /tmp/62.p5m ===' 00:09:16.014 + cat /tmp/62.p5m 00:09:16.014 + echo '=== End of file: /tmp/62.p5m ===' 00:09:16.014 + echo '' 00:09:16.014 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1Ie ===' 00:09:16.014 + cat /tmp/spdk_tgt_config.json.1Ie 00:09:16.014 + echo '=== End of file: /tmp/spdk_tgt_config.json.1Ie ===' 00:09:16.014 + echo '' 00:09:16.014 + rm /tmp/62.p5m /tmp/spdk_tgt_config.json.1Ie 00:09:16.014 + exit 1 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:09:16.014 INFO: configuration change detected. 
00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@324 -- # [[ -n 1754147 ]] 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:16.014 08:06:20 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.014 08:06:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.274 08:06:20 json_config -- json_config/json_config.sh@330 -- # killprocess 1754147 00:09:16.274 08:06:20 json_config -- common/autotest_common.sh@954 -- # '[' -z 1754147 ']' 00:09:16.274 08:06:20 json_config -- common/autotest_common.sh@958 -- # kill -0 
1754147 00:09:16.274 08:06:20 json_config -- common/autotest_common.sh@959 -- # uname 00:09:16.274 08:06:20 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.274 08:06:20 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1754147 00:09:16.275 08:06:20 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.275 08:06:20 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.275 08:06:20 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1754147' 00:09:16.275 killing process with pid 1754147 00:09:16.275 08:06:20 json_config -- common/autotest_common.sh@973 -- # kill 1754147 00:09:16.275 08:06:20 json_config -- common/autotest_common.sh@978 -- # wait 1754147 00:09:16.535 08:06:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:16.535 08:06:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:16.535 08:06:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.535 08:06:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.535 08:06:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:16.535 08:06:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:16.535 INFO: Success 00:09:16.535 00:09:16.535 real 0m7.423s 00:09:16.535 user 0m8.777s 00:09:16.535 sys 0m2.133s 00:09:16.535 08:06:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.535 08:06:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.535 ************************************ 00:09:16.535 END TEST json_config 00:09:16.535 ************************************ 00:09:16.535 08:06:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:16.535 08:06:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.535 08:06:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.535 08:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:16.535 ************************************ 00:09:16.535 START TEST json_config_extra_key 00:09:16.535 ************************************ 00:09:16.535 08:06:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.796 08:06:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.796 --rc genhtml_branch_coverage=1 00:09:16.796 --rc genhtml_function_coverage=1 00:09:16.796 --rc genhtml_legend=1 00:09:16.796 --rc geninfo_all_blocks=1 
00:09:16.796 --rc geninfo_unexecuted_blocks=1 00:09:16.796 00:09:16.796 ' 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.796 --rc genhtml_branch_coverage=1 00:09:16.796 --rc genhtml_function_coverage=1 00:09:16.796 --rc genhtml_legend=1 00:09:16.796 --rc geninfo_all_blocks=1 00:09:16.796 --rc geninfo_unexecuted_blocks=1 00:09:16.796 00:09:16.796 ' 00:09:16.796 08:06:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.796 --rc genhtml_branch_coverage=1 00:09:16.796 --rc genhtml_function_coverage=1 00:09:16.796 --rc genhtml_legend=1 00:09:16.796 --rc geninfo_all_blocks=1 00:09:16.796 --rc geninfo_unexecuted_blocks=1 00:09:16.796 00:09:16.796 ' 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.797 --rc genhtml_branch_coverage=1 00:09:16.797 --rc genhtml_function_coverage=1 00:09:16.797 --rc genhtml_legend=1 00:09:16.797 --rc geninfo_all_blocks=1 00:09:16.797 --rc geninfo_unexecuted_blocks=1 00:09:16.797 00:09:16.797 ' 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.797 08:06:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.797 08:06:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.797 08:06:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.797 08:06:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.797 08:06:21 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.797 08:06:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.797 08:06:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.797 08:06:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:16.797 08:06:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:16.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:16.797 08:06:21 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:16.797 08:06:21 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:16.797 INFO: launching applications... 00:09:16.797 08:06:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1754869 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:09:16.797 Waiting for target to run... 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1754869 /var/tmp/spdk_tgt.sock 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1754869 ']' 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:16.797 08:06:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:16.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.797 08:06:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:16.797 [2024-11-20 08:06:21.496969] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:16.797 [2024-11-20 08:06:21.497051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754869 ] 00:09:17.367 [2024-11-20 08:06:21.824221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.367 [2024-11-20 08:06:21.857240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.627 08:06:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.627 08:06:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:17.627 00:09:17.627 08:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:17.627 INFO: shutting down applications... 00:09:17.627 08:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1754869 ]] 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1754869 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1754869 00:09:17.627 08:06:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:18.198 08:06:22 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1754869 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:18.198 08:06:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:18.198 SPDK target shutdown done 00:09:18.198 08:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:18.198 Success 00:09:18.198 00:09:18.198 real 0m1.566s 00:09:18.198 user 0m1.148s 00:09:18.198 sys 0m0.464s 00:09:18.198 08:06:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.198 08:06:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 ************************************ 00:09:18.198 END TEST json_config_extra_key 00:09:18.198 ************************************ 00:09:18.198 08:06:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:18.198 08:06:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.198 08:06:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.198 08:06:22 -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 ************************************ 00:09:18.198 START TEST alias_rpc 00:09:18.198 ************************************ 00:09:18.198 08:06:22 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:18.459 * Looking for test storage... 
00:09:18.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:18.459 08:06:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.459 08:06:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.459 08:06:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.459 08:06:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.459 --rc genhtml_branch_coverage=1 00:09:18.459 --rc genhtml_function_coverage=1 00:09:18.459 --rc genhtml_legend=1 00:09:18.459 --rc geninfo_all_blocks=1 00:09:18.459 --rc geninfo_unexecuted_blocks=1 00:09:18.459 00:09:18.459 ' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.459 --rc genhtml_branch_coverage=1 00:09:18.459 --rc genhtml_function_coverage=1 00:09:18.459 --rc genhtml_legend=1 00:09:18.459 --rc geninfo_all_blocks=1 00:09:18.459 --rc geninfo_unexecuted_blocks=1 00:09:18.459 00:09:18.459 ' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:09:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.459 --rc genhtml_branch_coverage=1 00:09:18.459 --rc genhtml_function_coverage=1 00:09:18.459 --rc genhtml_legend=1 00:09:18.459 --rc geninfo_all_blocks=1 00:09:18.459 --rc geninfo_unexecuted_blocks=1 00:09:18.459 00:09:18.459 ' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.459 --rc genhtml_branch_coverage=1 00:09:18.459 --rc genhtml_function_coverage=1 00:09:18.459 --rc genhtml_legend=1 00:09:18.459 --rc geninfo_all_blocks=1 00:09:18.459 --rc geninfo_unexecuted_blocks=1 00:09:18.459 00:09:18.459 ' 00:09:18.459 08:06:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:18.459 08:06:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1755264 00:09:18.459 08:06:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1755264 00:09:18.459 08:06:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1755264 ']' 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.459 08:06:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.459 [2024-11-20 08:06:23.135414] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:18.459 [2024-11-20 08:06:23.135484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755264 ] 00:09:18.719 [2024-11-20 08:06:23.221049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.719 [2024-11-20 08:06:23.262309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.289 08:06:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.289 08:06:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:19.289 08:06:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:19.550 08:06:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1755264 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1755264 ']' 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1755264 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755264 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755264' 00:09:19.550 killing process with pid 1755264 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@973 -- # kill 1755264 00:09:19.550 08:06:24 alias_rpc -- common/autotest_common.sh@978 -- # wait 1755264 00:09:19.811 00:09:19.811 real 0m1.543s 00:09:19.811 user 0m1.718s 00:09:19.811 sys 0m0.417s 00:09:19.811 08:06:24 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.811 08:06:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.811 ************************************ 00:09:19.811 END TEST alias_rpc 00:09:19.811 ************************************ 00:09:19.811 08:06:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:19.811 08:06:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:19.811 08:06:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.811 08:06:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.811 08:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:19.811 ************************************ 00:09:19.811 START TEST spdkcli_tcp 00:09:19.811 ************************************ 00:09:19.811 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:20.072 * Looking for test storage... 
00:09:20.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.072 08:06:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.072 --rc genhtml_branch_coverage=1 00:09:20.072 --rc genhtml_function_coverage=1 00:09:20.072 --rc genhtml_legend=1 00:09:20.072 --rc geninfo_all_blocks=1 00:09:20.072 --rc geninfo_unexecuted_blocks=1 00:09:20.072 00:09:20.072 ' 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.072 --rc genhtml_branch_coverage=1 00:09:20.072 --rc genhtml_function_coverage=1 00:09:20.072 --rc genhtml_legend=1 00:09:20.072 --rc geninfo_all_blocks=1 00:09:20.072 --rc geninfo_unexecuted_blocks=1 00:09:20.072 00:09:20.072 ' 00:09:20.072 08:06:24 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.072 --rc genhtml_branch_coverage=1 00:09:20.072 --rc genhtml_function_coverage=1 00:09:20.072 --rc genhtml_legend=1 00:09:20.072 --rc geninfo_all_blocks=1 00:09:20.072 --rc geninfo_unexecuted_blocks=1 00:09:20.072 00:09:20.072 ' 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.072 --rc genhtml_branch_coverage=1 00:09:20.072 --rc genhtml_function_coverage=1 00:09:20.072 --rc genhtml_legend=1 00:09:20.072 --rc geninfo_all_blocks=1 00:09:20.072 --rc geninfo_unexecuted_blocks=1 00:09:20.072 00:09:20.072 ' 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1755663 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1755663 00:09:20.072 08:06:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1755663 ']' 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.072 08:06:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.073 08:06:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.073 08:06:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.073 08:06:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.073 [2024-11-20 08:06:24.757551] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:20.073 [2024-11-20 08:06:24.757608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755663 ] 00:09:20.333 [2024-11-20 08:06:24.836850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.333 [2024-11-20 08:06:24.874107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.333 [2024-11-20 08:06:24.874198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.904 08:06:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.904 08:06:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:20.904 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:20.904 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1755739 00:09:20.904 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:21.164 [ 00:09:21.164 "bdev_malloc_delete", 00:09:21.164 "bdev_malloc_create", 00:09:21.164 "bdev_null_resize", 00:09:21.164 "bdev_null_delete", 00:09:21.164 "bdev_null_create", 00:09:21.164 "bdev_nvme_cuse_unregister", 00:09:21.164 "bdev_nvme_cuse_register", 00:09:21.164 "bdev_opal_new_user", 00:09:21.164 "bdev_opal_set_lock_state", 00:09:21.164 "bdev_opal_delete", 00:09:21.164 "bdev_opal_get_info", 00:09:21.164 "bdev_opal_create", 00:09:21.164 "bdev_nvme_opal_revert", 00:09:21.164 "bdev_nvme_opal_init", 00:09:21.164 "bdev_nvme_send_cmd", 00:09:21.164 "bdev_nvme_set_keys", 00:09:21.164 "bdev_nvme_get_path_iostat", 00:09:21.164 "bdev_nvme_get_mdns_discovery_info", 00:09:21.164 "bdev_nvme_stop_mdns_discovery", 00:09:21.164 "bdev_nvme_start_mdns_discovery", 00:09:21.164 "bdev_nvme_set_multipath_policy", 00:09:21.164 "bdev_nvme_set_preferred_path", 00:09:21.165 "bdev_nvme_get_io_paths", 00:09:21.165 "bdev_nvme_remove_error_injection", 00:09:21.165 "bdev_nvme_add_error_injection", 00:09:21.165 "bdev_nvme_get_discovery_info", 00:09:21.165 "bdev_nvme_stop_discovery", 00:09:21.165 "bdev_nvme_start_discovery", 00:09:21.165 "bdev_nvme_get_controller_health_info", 00:09:21.165 "bdev_nvme_disable_controller", 00:09:21.165 "bdev_nvme_enable_controller", 00:09:21.165 "bdev_nvme_reset_controller", 00:09:21.165 "bdev_nvme_get_transport_statistics", 00:09:21.165 "bdev_nvme_apply_firmware", 00:09:21.165 "bdev_nvme_detach_controller", 00:09:21.165 "bdev_nvme_get_controllers", 00:09:21.165 "bdev_nvme_attach_controller", 00:09:21.165 "bdev_nvme_set_hotplug", 00:09:21.165 "bdev_nvme_set_options", 00:09:21.165 "bdev_passthru_delete", 00:09:21.165 "bdev_passthru_create", 00:09:21.165 "bdev_lvol_set_parent_bdev", 00:09:21.165 "bdev_lvol_set_parent", 00:09:21.165 "bdev_lvol_check_shallow_copy", 00:09:21.165 "bdev_lvol_start_shallow_copy", 00:09:21.165 
"bdev_lvol_grow_lvstore", 00:09:21.165 "bdev_lvol_get_lvols", 00:09:21.165 "bdev_lvol_get_lvstores", 00:09:21.165 "bdev_lvol_delete", 00:09:21.165 "bdev_lvol_set_read_only", 00:09:21.165 "bdev_lvol_resize", 00:09:21.165 "bdev_lvol_decouple_parent", 00:09:21.165 "bdev_lvol_inflate", 00:09:21.165 "bdev_lvol_rename", 00:09:21.165 "bdev_lvol_clone_bdev", 00:09:21.165 "bdev_lvol_clone", 00:09:21.165 "bdev_lvol_snapshot", 00:09:21.165 "bdev_lvol_create", 00:09:21.165 "bdev_lvol_delete_lvstore", 00:09:21.165 "bdev_lvol_rename_lvstore", 00:09:21.165 "bdev_lvol_create_lvstore", 00:09:21.165 "bdev_raid_set_options", 00:09:21.165 "bdev_raid_remove_base_bdev", 00:09:21.165 "bdev_raid_add_base_bdev", 00:09:21.165 "bdev_raid_delete", 00:09:21.165 "bdev_raid_create", 00:09:21.165 "bdev_raid_get_bdevs", 00:09:21.165 "bdev_error_inject_error", 00:09:21.165 "bdev_error_delete", 00:09:21.165 "bdev_error_create", 00:09:21.165 "bdev_split_delete", 00:09:21.165 "bdev_split_create", 00:09:21.165 "bdev_delay_delete", 00:09:21.165 "bdev_delay_create", 00:09:21.165 "bdev_delay_update_latency", 00:09:21.165 "bdev_zone_block_delete", 00:09:21.165 "bdev_zone_block_create", 00:09:21.165 "blobfs_create", 00:09:21.165 "blobfs_detect", 00:09:21.165 "blobfs_set_cache_size", 00:09:21.165 "bdev_aio_delete", 00:09:21.165 "bdev_aio_rescan", 00:09:21.165 "bdev_aio_create", 00:09:21.165 "bdev_ftl_set_property", 00:09:21.165 "bdev_ftl_get_properties", 00:09:21.165 "bdev_ftl_get_stats", 00:09:21.165 "bdev_ftl_unmap", 00:09:21.165 "bdev_ftl_unload", 00:09:21.165 "bdev_ftl_delete", 00:09:21.165 "bdev_ftl_load", 00:09:21.165 "bdev_ftl_create", 00:09:21.165 "bdev_virtio_attach_controller", 00:09:21.165 "bdev_virtio_scsi_get_devices", 00:09:21.165 "bdev_virtio_detach_controller", 00:09:21.165 "bdev_virtio_blk_set_hotplug", 00:09:21.165 "bdev_iscsi_delete", 00:09:21.165 "bdev_iscsi_create", 00:09:21.165 "bdev_iscsi_set_options", 00:09:21.165 "accel_error_inject_error", 00:09:21.165 "ioat_scan_accel_module", 
00:09:21.165 "dsa_scan_accel_module", 00:09:21.165 "iaa_scan_accel_module", 00:09:21.165 "vfu_virtio_create_fs_endpoint", 00:09:21.165 "vfu_virtio_create_scsi_endpoint", 00:09:21.165 "vfu_virtio_scsi_remove_target", 00:09:21.165 "vfu_virtio_scsi_add_target", 00:09:21.165 "vfu_virtio_create_blk_endpoint", 00:09:21.165 "vfu_virtio_delete_endpoint", 00:09:21.165 "keyring_file_remove_key", 00:09:21.165 "keyring_file_add_key", 00:09:21.165 "keyring_linux_set_options", 00:09:21.165 "fsdev_aio_delete", 00:09:21.165 "fsdev_aio_create", 00:09:21.165 "iscsi_get_histogram", 00:09:21.165 "iscsi_enable_histogram", 00:09:21.165 "iscsi_set_options", 00:09:21.165 "iscsi_get_auth_groups", 00:09:21.165 "iscsi_auth_group_remove_secret", 00:09:21.165 "iscsi_auth_group_add_secret", 00:09:21.165 "iscsi_delete_auth_group", 00:09:21.165 "iscsi_create_auth_group", 00:09:21.165 "iscsi_set_discovery_auth", 00:09:21.165 "iscsi_get_options", 00:09:21.165 "iscsi_target_node_request_logout", 00:09:21.165 "iscsi_target_node_set_redirect", 00:09:21.165 "iscsi_target_node_set_auth", 00:09:21.165 "iscsi_target_node_add_lun", 00:09:21.165 "iscsi_get_stats", 00:09:21.165 "iscsi_get_connections", 00:09:21.165 "iscsi_portal_group_set_auth", 00:09:21.165 "iscsi_start_portal_group", 00:09:21.165 "iscsi_delete_portal_group", 00:09:21.165 "iscsi_create_portal_group", 00:09:21.165 "iscsi_get_portal_groups", 00:09:21.165 "iscsi_delete_target_node", 00:09:21.165 "iscsi_target_node_remove_pg_ig_maps", 00:09:21.165 "iscsi_target_node_add_pg_ig_maps", 00:09:21.165 "iscsi_create_target_node", 00:09:21.165 "iscsi_get_target_nodes", 00:09:21.165 "iscsi_delete_initiator_group", 00:09:21.165 "iscsi_initiator_group_remove_initiators", 00:09:21.165 "iscsi_initiator_group_add_initiators", 00:09:21.165 "iscsi_create_initiator_group", 00:09:21.165 "iscsi_get_initiator_groups", 00:09:21.165 "nvmf_set_crdt", 00:09:21.165 "nvmf_set_config", 00:09:21.165 "nvmf_set_max_subsystems", 00:09:21.165 "nvmf_stop_mdns_prr", 
00:09:21.165 "nvmf_publish_mdns_prr", 00:09:21.165 "nvmf_subsystem_get_listeners", 00:09:21.165 "nvmf_subsystem_get_qpairs", 00:09:21.165 "nvmf_subsystem_get_controllers", 00:09:21.165 "nvmf_get_stats", 00:09:21.165 "nvmf_get_transports", 00:09:21.165 "nvmf_create_transport", 00:09:21.165 "nvmf_get_targets", 00:09:21.165 "nvmf_delete_target", 00:09:21.165 "nvmf_create_target", 00:09:21.165 "nvmf_subsystem_allow_any_host", 00:09:21.165 "nvmf_subsystem_set_keys", 00:09:21.165 "nvmf_subsystem_remove_host", 00:09:21.165 "nvmf_subsystem_add_host", 00:09:21.165 "nvmf_ns_remove_host", 00:09:21.165 "nvmf_ns_add_host", 00:09:21.165 "nvmf_subsystem_remove_ns", 00:09:21.165 "nvmf_subsystem_set_ns_ana_group", 00:09:21.165 "nvmf_subsystem_add_ns", 00:09:21.165 "nvmf_subsystem_listener_set_ana_state", 00:09:21.165 "nvmf_discovery_get_referrals", 00:09:21.165 "nvmf_discovery_remove_referral", 00:09:21.165 "nvmf_discovery_add_referral", 00:09:21.165 "nvmf_subsystem_remove_listener", 00:09:21.165 "nvmf_subsystem_add_listener", 00:09:21.165 "nvmf_delete_subsystem", 00:09:21.165 "nvmf_create_subsystem", 00:09:21.165 "nvmf_get_subsystems", 00:09:21.165 "env_dpdk_get_mem_stats", 00:09:21.165 "nbd_get_disks", 00:09:21.165 "nbd_stop_disk", 00:09:21.165 "nbd_start_disk", 00:09:21.165 "ublk_recover_disk", 00:09:21.165 "ublk_get_disks", 00:09:21.165 "ublk_stop_disk", 00:09:21.165 "ublk_start_disk", 00:09:21.165 "ublk_destroy_target", 00:09:21.165 "ublk_create_target", 00:09:21.165 "virtio_blk_create_transport", 00:09:21.165 "virtio_blk_get_transports", 00:09:21.165 "vhost_controller_set_coalescing", 00:09:21.165 "vhost_get_controllers", 00:09:21.165 "vhost_delete_controller", 00:09:21.165 "vhost_create_blk_controller", 00:09:21.165 "vhost_scsi_controller_remove_target", 00:09:21.165 "vhost_scsi_controller_add_target", 00:09:21.165 "vhost_start_scsi_controller", 00:09:21.165 "vhost_create_scsi_controller", 00:09:21.165 "thread_set_cpumask", 00:09:21.165 "scheduler_set_options", 00:09:21.165 
"framework_get_governor", 00:09:21.165 "framework_get_scheduler", 00:09:21.165 "framework_set_scheduler", 00:09:21.165 "framework_get_reactors", 00:09:21.165 "thread_get_io_channels", 00:09:21.165 "thread_get_pollers", 00:09:21.165 "thread_get_stats", 00:09:21.165 "framework_monitor_context_switch", 00:09:21.165 "spdk_kill_instance", 00:09:21.165 "log_enable_timestamps", 00:09:21.165 "log_get_flags", 00:09:21.165 "log_clear_flag", 00:09:21.165 "log_set_flag", 00:09:21.165 "log_get_level", 00:09:21.165 "log_set_level", 00:09:21.165 "log_get_print_level", 00:09:21.165 "log_set_print_level", 00:09:21.165 "framework_enable_cpumask_locks", 00:09:21.165 "framework_disable_cpumask_locks", 00:09:21.165 "framework_wait_init", 00:09:21.165 "framework_start_init", 00:09:21.165 "scsi_get_devices", 00:09:21.165 "bdev_get_histogram", 00:09:21.165 "bdev_enable_histogram", 00:09:21.165 "bdev_set_qos_limit", 00:09:21.165 "bdev_set_qd_sampling_period", 00:09:21.165 "bdev_get_bdevs", 00:09:21.165 "bdev_reset_iostat", 00:09:21.165 "bdev_get_iostat", 00:09:21.165 "bdev_examine", 00:09:21.165 "bdev_wait_for_examine", 00:09:21.165 "bdev_set_options", 00:09:21.165 "accel_get_stats", 00:09:21.165 "accel_set_options", 00:09:21.165 "accel_set_driver", 00:09:21.165 "accel_crypto_key_destroy", 00:09:21.165 "accel_crypto_keys_get", 00:09:21.165 "accel_crypto_key_create", 00:09:21.165 "accel_assign_opc", 00:09:21.165 "accel_get_module_info", 00:09:21.165 "accel_get_opc_assignments", 00:09:21.165 "vmd_rescan", 00:09:21.165 "vmd_remove_device", 00:09:21.165 "vmd_enable", 00:09:21.165 "sock_get_default_impl", 00:09:21.165 "sock_set_default_impl", 00:09:21.165 "sock_impl_set_options", 00:09:21.165 "sock_impl_get_options", 00:09:21.165 "iobuf_get_stats", 00:09:21.165 "iobuf_set_options", 00:09:21.165 "keyring_get_keys", 00:09:21.166 "vfu_tgt_set_base_path", 00:09:21.166 "framework_get_pci_devices", 00:09:21.166 "framework_get_config", 00:09:21.166 "framework_get_subsystems", 00:09:21.166 
"fsdev_set_opts", 00:09:21.166 "fsdev_get_opts", 00:09:21.166 "trace_get_info", 00:09:21.166 "trace_get_tpoint_group_mask", 00:09:21.166 "trace_disable_tpoint_group", 00:09:21.166 "trace_enable_tpoint_group", 00:09:21.166 "trace_clear_tpoint_mask", 00:09:21.166 "trace_set_tpoint_mask", 00:09:21.166 "notify_get_notifications", 00:09:21.166 "notify_get_types", 00:09:21.166 "spdk_get_version", 00:09:21.166 "rpc_get_methods" 00:09:21.166 ] 00:09:21.166 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.166 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:21.166 08:06:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1755663 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1755663 ']' 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1755663 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755663 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755663' 00:09:21.166 killing process with pid 1755663 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1755663 00:09:21.166 08:06:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1755663 00:09:21.425 00:09:21.425 real 0m1.546s 00:09:21.425 user 0m2.814s 00:09:21.425 sys 0m0.461s 00:09:21.425 08:06:26 spdkcli_tcp -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:09:21.425 08:06:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.425 ************************************ 00:09:21.425 END TEST spdkcli_tcp 00:09:21.425 ************************************ 00:09:21.425 08:06:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:21.425 08:06:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.425 08:06:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.425 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:09:21.425 ************************************ 00:09:21.425 START TEST dpdk_mem_utility 00:09:21.425 ************************************ 00:09:21.425 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:21.685 * Looking for test storage... 00:09:21.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:21.685 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.685 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.685 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.685 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.686 08:06:26 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.686 08:06:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.686 08:06:26 dpdk_mem_utility 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.686 --rc genhtml_branch_coverage=1 00:09:21.686 --rc genhtml_function_coverage=1 00:09:21.686 --rc genhtml_legend=1 00:09:21.686 --rc geninfo_all_blocks=1 00:09:21.686 --rc geninfo_unexecuted_blocks=1 00:09:21.686 00:09:21.686 ' 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.686 --rc genhtml_branch_coverage=1 00:09:21.686 --rc genhtml_function_coverage=1 00:09:21.686 --rc genhtml_legend=1 00:09:21.686 --rc geninfo_all_blocks=1 00:09:21.686 --rc geninfo_unexecuted_blocks=1 00:09:21.686 00:09:21.686 ' 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.686 --rc genhtml_branch_coverage=1 00:09:21.686 --rc genhtml_function_coverage=1 00:09:21.686 --rc genhtml_legend=1 00:09:21.686 --rc geninfo_all_blocks=1 00:09:21.686 --rc geninfo_unexecuted_blocks=1 00:09:21.686 00:09:21.686 ' 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.686 --rc genhtml_branch_coverage=1 00:09:21.686 --rc genhtml_function_coverage=1 00:09:21.686 --rc genhtml_legend=1 00:09:21.686 --rc geninfo_all_blocks=1 00:09:21.686 --rc geninfo_unexecuted_blocks=1 00:09:21.686 00:09:21.686 ' 00:09:21.686 08:06:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:21.686 08:06:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1756076 00:09:21.686 08:06:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:21.686 08:06:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1756076 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1756076 ']' 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.686 08:06:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:21.686 [2024-11-20 08:06:26.357962] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:21.686 [2024-11-20 08:06:26.358019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756076 ] 00:09:21.947 [2024-11-20 08:06:26.436488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.947 [2024-11-20 08:06:26.472581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.517 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.517 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:22.517 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:22.517 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:22.517 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.517 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:22.517 { 00:09:22.517 "filename": "/tmp/spdk_mem_dump.txt" 00:09:22.517 } 00:09:22.517 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.517 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:22.517 DPDK memory size 810.000000 MiB in 1 heap(s) 00:09:22.517 1 heaps totaling size 810.000000 MiB 00:09:22.517 size: 810.000000 MiB heap id: 0 00:09:22.517 end heaps---------- 00:09:22.517 9 mempools totaling size 595.772034 MiB 00:09:22.517 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:22.517 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:22.517 size: 92.545471 MiB name: bdev_io_1756076 00:09:22.517 size: 50.003479 MiB name: msgpool_1756076 00:09:22.517 size: 36.509338 MiB name: fsdev_io_1756076 00:09:22.517 
size: 21.763794 MiB name: PDU_Pool 00:09:22.517 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:22.517 size: 4.133484 MiB name: evtpool_1756076 00:09:22.517 size: 0.026123 MiB name: Session_Pool 00:09:22.517 end mempools------- 00:09:22.517 6 memzones totaling size 4.142822 MiB 00:09:22.517 size: 1.000366 MiB name: RG_ring_0_1756076 00:09:22.517 size: 1.000366 MiB name: RG_ring_1_1756076 00:09:22.517 size: 1.000366 MiB name: RG_ring_4_1756076 00:09:22.517 size: 1.000366 MiB name: RG_ring_5_1756076 00:09:22.517 size: 0.125366 MiB name: RG_ring_2_1756076 00:09:22.517 size: 0.015991 MiB name: RG_ring_3_1756076 00:09:22.517 end memzones------- 00:09:22.517 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:22.779 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:22.779 list of free elements. size: 10.862488 MiB 00:09:22.779 element at address: 0x200018a00000 with size: 0.999878 MiB 00:09:22.779 element at address: 0x200018c00000 with size: 0.999878 MiB 00:09:22.779 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:22.779 element at address: 0x200031800000 with size: 0.994446 MiB 00:09:22.779 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:22.779 element at address: 0x200012c00000 with size: 0.954285 MiB 00:09:22.779 element at address: 0x200018e00000 with size: 0.936584 MiB 00:09:22.779 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:22.779 element at address: 0x20001a600000 with size: 0.582886 MiB 00:09:22.779 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:22.779 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:22.779 element at address: 0x200019000000 with size: 0.485657 MiB 00:09:22.779 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:22.779 element at address: 0x200027a00000 with size: 0.410034 MiB 
00:09:22.779 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:22.779 list of standard malloc elements. size: 199.218628 MiB 00:09:22.779 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:22.779 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:22.779 element at address: 0x200018afff80 with size: 1.000122 MiB 00:09:22.779 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:09:22.779 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:22.779 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:22.779 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:09:22.779 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:22.779 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:09:22.779 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200000cff0c0 with 
size: 0.000183 MiB 00:09:22.779 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:09:22.779 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20001a695380 with size: 0.000183 MiB 00:09:22.779 element at address: 0x20001a695440 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200027a69040 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:09:22.779 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:09:22.779 list of memzone associated elements. 
size: 599.918884 MiB 00:09:22.779 element at address: 0x20001a695500 with size: 211.416748 MiB 00:09:22.779 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:22.779 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:09:22.779 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:22.779 element at address: 0x200012df4780 with size: 92.045044 MiB 00:09:22.779 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1756076_0 00:09:22.779 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:22.779 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1756076_0 00:09:22.779 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:22.779 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1756076_0 00:09:22.779 element at address: 0x2000191be940 with size: 20.255554 MiB 00:09:22.779 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:22.779 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:09:22.779 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:22.779 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:22.779 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1756076_0 00:09:22.779 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:22.779 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1756076 00:09:22.779 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:22.779 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1756076 00:09:22.779 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:22.779 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:22.779 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:09:22.779 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:22.779 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:22.779 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:22.779 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:22.779 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:22.779 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:22.780 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1756076 00:09:22.780 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:22.780 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1756076 00:09:22.780 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:09:22.780 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1756076 00:09:22.780 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:09:22.780 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1756076 00:09:22.780 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:22.780 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1756076 00:09:22.780 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:22.780 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1756076 00:09:22.780 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:22.780 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:22.780 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:22.780 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:22.780 element at address: 0x20001907c540 with size: 0.250488 MiB 00:09:22.780 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:22.780 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:22.780 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1756076 00:09:22.780 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:22.780 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1756076 00:09:22.780 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:09:22.780 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:22.780 element at address: 0x200027a69100 with size: 0.023743 MiB 00:09:22.780 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:22.780 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:22.780 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1756076 00:09:22.780 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:09:22.780 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:22.780 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:22.780 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1756076 00:09:22.780 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:22.780 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1756076 00:09:22.780 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:22.780 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1756076 00:09:22.780 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:09:22.780 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:22.780 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:22.780 08:06:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1756076 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1756076 ']' 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1756076 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756076 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.780 08:06:27 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756076' 00:09:22.780 killing process with pid 1756076 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1756076 00:09:22.780 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1756076 00:09:23.042 00:09:23.042 real 0m1.435s 00:09:23.042 user 0m1.536s 00:09:23.042 sys 0m0.407s 00:09:23.042 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.042 08:06:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:23.042 ************************************ 00:09:23.042 END TEST dpdk_mem_utility 00:09:23.042 ************************************ 00:09:23.042 08:06:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:23.042 08:06:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.042 08:06:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.042 08:06:27 -- common/autotest_common.sh@10 -- # set +x 00:09:23.042 ************************************ 00:09:23.042 START TEST event 00:09:23.042 ************************************ 00:09:23.042 08:06:27 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:23.042 * Looking for test storage... 
00:09:23.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:23.042 08:06:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.042 08:06:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.042 08:06:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.302 08:06:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.302 08:06:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.302 08:06:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.302 08:06:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.302 08:06:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.302 08:06:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.302 08:06:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.302 08:06:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.302 08:06:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.302 08:06:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.302 08:06:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.302 08:06:27 event -- scripts/common.sh@344 -- # case "$op" in 00:09:23.302 08:06:27 event -- scripts/common.sh@345 -- # : 1 00:09:23.302 08:06:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.302 08:06:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.302 08:06:27 event -- scripts/common.sh@365 -- # decimal 1 00:09:23.302 08:06:27 event -- scripts/common.sh@353 -- # local d=1 00:09:23.302 08:06:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.302 08:06:27 event -- scripts/common.sh@355 -- # echo 1 00:09:23.302 08:06:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.302 08:06:27 event -- scripts/common.sh@366 -- # decimal 2 00:09:23.302 08:06:27 event -- scripts/common.sh@353 -- # local d=2 00:09:23.302 08:06:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.302 08:06:27 event -- scripts/common.sh@355 -- # echo 2 00:09:23.302 08:06:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.302 08:06:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.302 08:06:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.302 08:06:27 event -- scripts/common.sh@368 -- # return 0 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.302 --rc genhtml_branch_coverage=1 00:09:23.302 --rc genhtml_function_coverage=1 00:09:23.302 --rc genhtml_legend=1 00:09:23.302 --rc geninfo_all_blocks=1 00:09:23.302 --rc geninfo_unexecuted_blocks=1 00:09:23.302 00:09:23.302 ' 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.302 --rc genhtml_branch_coverage=1 00:09:23.302 --rc genhtml_function_coverage=1 00:09:23.302 --rc genhtml_legend=1 00:09:23.302 --rc geninfo_all_blocks=1 00:09:23.302 --rc geninfo_unexecuted_blocks=1 00:09:23.302 00:09:23.302 ' 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.302 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:23.302 --rc genhtml_branch_coverage=1 00:09:23.302 --rc genhtml_function_coverage=1 00:09:23.302 --rc genhtml_legend=1 00:09:23.302 --rc geninfo_all_blocks=1 00:09:23.302 --rc geninfo_unexecuted_blocks=1 00:09:23.302 00:09:23.302 ' 00:09:23.302 08:06:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.302 --rc genhtml_branch_coverage=1 00:09:23.303 --rc genhtml_function_coverage=1 00:09:23.303 --rc genhtml_legend=1 00:09:23.303 --rc geninfo_all_blocks=1 00:09:23.303 --rc geninfo_unexecuted_blocks=1 00:09:23.303 00:09:23.303 ' 00:09:23.303 08:06:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:23.303 08:06:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:23.303 08:06:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:23.303 08:06:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:23.303 08:06:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.303 08:06:27 event -- common/autotest_common.sh@10 -- # set +x 00:09:23.303 ************************************ 00:09:23.303 START TEST event_perf 00:09:23.303 ************************************ 00:09:23.303 08:06:27 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:23.303 Running I/O for 1 seconds...[2024-11-20 08:06:27.881533] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:23.303 [2024-11-20 08:06:27.881632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756479 ] 00:09:23.303 [2024-11-20 08:06:27.968188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.303 [2024-11-20 08:06:28.013218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.303 [2024-11-20 08:06:28.013335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.303 [2024-11-20 08:06:28.013491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.303 Running I/O for 1 seconds...[2024-11-20 08:06:28.013491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.685 00:09:24.685 lcore 0: 180004 00:09:24.685 lcore 1: 180002 00:09:24.685 lcore 2: 180002 00:09:24.685 lcore 3: 180005 00:09:24.685 done. 
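The per-lcore counters reported just above ("lcore 0: 180004" and so on) follow a fixed `lcore <id>: <count>` shape, so they can be tallied mechanically when triaging event_perf runs. A minimal sketch follows; the parsing helper and its name are my own illustration, not part of SPDK or its test scripts.

```python
import re


def tally_lcore_events(lines):
    """Sum per-lcore event counts from event_perf output lines
    of the form 'lcore <id>: <count>'."""
    counts = {}
    for line in lines:
        m = re.match(r"lcore (\d+): (\d+)", line.strip())
        if m:
            counts[int(m.group(1))] = int(m.group(2))
    return counts, sum(counts.values())


# The four lcore lines reported in the run above:
output = [
    "lcore 0: 180004",
    "lcore 1: 180002",
    "lcore 2: 180002",
    "lcore 3: 180005",
]
per_core, total = tally_lcore_events(output)
# total is the aggregate event count across the 0xF core mask
```

A quick sum like this makes it easy to spot a stalled reactor (one lcore counter far below its siblings) without reading the raw trace.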
00:09:24.685 00:09:24.685 real 0m1.188s 00:09:24.685 user 0m4.103s 00:09:24.685 sys 0m0.083s 00:09:24.685 08:06:29 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.685 08:06:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:24.685 ************************************ 00:09:24.685 END TEST event_perf 00:09:24.685 ************************************ 00:09:24.685 08:06:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:24.685 08:06:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:24.685 08:06:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.685 08:06:29 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.685 ************************************ 00:09:24.685 START TEST event_reactor 00:09:24.685 ************************************ 00:09:24.685 08:06:29 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:24.685 [2024-11-20 08:06:29.111781] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:24.685 [2024-11-20 08:06:29.111813] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756651 ] 00:09:24.685 [2024-11-20 08:06:29.179321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.685 [2024-11-20 08:06:29.213908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.625 test_start 00:09:25.625 oneshot 00:09:25.625 tick 100 00:09:25.625 tick 100 00:09:25.625 tick 250 00:09:25.625 tick 100 00:09:25.625 tick 100 00:09:25.625 tick 100 00:09:25.625 tick 250 00:09:25.625 tick 500 00:09:25.625 tick 100 00:09:25.625 tick 100 00:09:25.625 tick 250 00:09:25.625 tick 100 00:09:25.625 tick 100 00:09:25.625 test_end 00:09:25.625 00:09:25.625 real 0m1.141s 00:09:25.625 user 0m1.079s 00:09:25.625 sys 0m0.059s 00:09:25.625 08:06:30 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.625 08:06:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:25.625 ************************************ 00:09:25.625 END TEST event_reactor 00:09:25.625 ************************************ 00:09:25.625 08:06:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:25.625 08:06:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.625 08:06:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.625 08:06:30 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.625 ************************************ 00:09:25.625 START TEST event_reactor_perf 00:09:25.625 ************************************ 00:09:25.625 08:06:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:09:25.625 [2024-11-20 08:06:30.326783] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:25.625 [2024-11-20 08:06:30.326827] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756867 ] 00:09:25.885 [2024-11-20 08:06:30.401889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.885 [2024-11-20 08:06:30.436880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.827 test_start 00:09:26.827 test_end 00:09:26.827 Performance: 368090 events per second 00:09:26.827 00:09:26.827 real 0m1.151s 00:09:26.827 user 0m1.086s 00:09:26.827 sys 0m0.061s 00:09:26.827 08:06:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.827 08:06:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 ************************************ 00:09:26.827 END TEST event_reactor_perf 00:09:26.827 ************************************ 00:09:26.827 08:06:31 event -- event/event.sh@49 -- # uname -s 00:09:26.827 08:06:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:26.827 08:06:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:26.827 08:06:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.827 08:06:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.827 08:06:31 event -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 ************************************ 00:09:26.827 START TEST event_scheduler 00:09:26.827 ************************************ 00:09:26.827 08:06:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:27.088 * Looking for test storage... 00:09:27.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:27.088 08:06:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.088 08:06:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.088 08:06:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.088 08:06:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.088 08:06:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.089 08:06:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.089 --rc genhtml_branch_coverage=1 00:09:27.089 --rc genhtml_function_coverage=1 00:09:27.089 --rc genhtml_legend=1 00:09:27.089 --rc geninfo_all_blocks=1 00:09:27.089 --rc geninfo_unexecuted_blocks=1 00:09:27.089 00:09:27.089 ' 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.089 --rc genhtml_branch_coverage=1 00:09:27.089 --rc genhtml_function_coverage=1 00:09:27.089 --rc 
genhtml_legend=1 00:09:27.089 --rc geninfo_all_blocks=1 00:09:27.089 --rc geninfo_unexecuted_blocks=1 00:09:27.089 00:09:27.089 ' 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.089 --rc genhtml_branch_coverage=1 00:09:27.089 --rc genhtml_function_coverage=1 00:09:27.089 --rc genhtml_legend=1 00:09:27.089 --rc geninfo_all_blocks=1 00:09:27.089 --rc geninfo_unexecuted_blocks=1 00:09:27.089 00:09:27.089 ' 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.089 --rc genhtml_branch_coverage=1 00:09:27.089 --rc genhtml_function_coverage=1 00:09:27.089 --rc genhtml_legend=1 00:09:27.089 --rc geninfo_all_blocks=1 00:09:27.089 --rc geninfo_unexecuted_blocks=1 00:09:27.089 00:09:27.089 ' 00:09:27.089 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:27.089 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1757252 00:09:27.089 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:27.089 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1757252 00:09:27.089 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1757252 ']' 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.089 08:06:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.089 [2024-11-20 08:06:31.795460] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:27.089 [2024-11-20 08:06:31.795536] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757252 ] 00:09:27.352 [2024-11-20 08:06:31.864581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.352 [2024-11-20 08:06:31.903911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.352 [2024-11-20 08:06:31.904066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.352 [2024-11-20 08:06:31.904190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.352 [2024-11-20 08:06:31.904191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:27.352 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.352 [2024-11-20 08:06:31.952649] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:27.352 [2024-11-20 08:06:31.952662] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:27.352 [2024-11-20 08:06:31.952669] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:27.352 [2024-11-20 08:06:31.952673] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:27.352 [2024-11-20 08:06:31.952677] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.352 08:06:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.352 08:06:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.352 [2024-11-20 08:06:32.008923] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:27.352 08:06:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.352 08:06:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:27.352 08:06:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.352 08:06:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.352 08:06:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.352 ************************************ 00:09:27.352 START TEST scheduler_create_thread 00:09:27.352 ************************************ 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.352 2 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.352 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.352 3 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 4 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 5 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 6 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 7 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 8 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 9 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.875 10 00:09:27.875 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.875 08:06:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:27.875 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.875 08:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.260 08:06:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.260 08:06:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:29.260 08:06:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:29.260 08:06:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.260 08:06:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.255 08:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.255 08:06:34 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:30.255 08:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.255 08:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.854 08:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.854 08:06:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:30.854 08:06:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:30.854 08:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.854 08:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:31.795 08:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.795 00:09:31.795 real 0m4.226s 00:09:31.795 user 0m0.022s 00:09:31.795 sys 0m0.010s 00:09:31.795 08:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.795 08:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:31.795 ************************************ 00:09:31.795 END TEST scheduler_create_thread 00:09:31.795 ************************************ 00:09:31.795 08:06:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:31.795 08:06:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1757252 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1757252 ']' 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1757252 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757252 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757252' 00:09:31.795 killing process with pid 1757252 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1757252 00:09:31.795 08:06:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1757252 00:09:32.056 [2024-11-20 08:06:36.554115] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:09:32.056 00:09:32.056 real 0m5.164s 00:09:32.056 user 0m10.239s 00:09:32.056 sys 0m0.376s 00:09:32.056 08:06:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.056 08:06:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:32.056 ************************************ 00:09:32.056 END TEST event_scheduler 00:09:32.056 ************************************ 00:09:32.056 08:06:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:32.056 08:06:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:32.056 08:06:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.056 08:06:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.056 08:06:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:32.317 ************************************ 00:09:32.317 START TEST app_repeat 00:09:32.317 ************************************ 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1758312 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1758312' 00:09:32.317 Process app_repeat pid: 1758312 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:32.317 spdk_app_start Round 0 00:09:32.317 08:06:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1758312 /var/tmp/spdk-nbd.sock 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1758312 ']' 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:32.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.317 08:06:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:32.317 [2024-11-20 08:06:36.820971] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:32.317 [2024-11-20 08:06:36.821034] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758312 ] 00:09:32.317 [2024-11-20 08:06:36.901058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.317 [2024-11-20 08:06:36.937154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.317 [2024-11-20 08:06:36.937156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.317 08:06:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.317 08:06:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:32.317 08:06:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.577 Malloc0 00:09:32.577 08:06:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.838 Malloc1 00:09:32.838 08:06:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.838 
08:06:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.838 08:06:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:32.838 /dev/nbd0 00:09:33.098 08:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.098 08:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:33.098 08:06:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:33.099 1+0 records in 00:09:33.099 1+0 records out 00:09:33.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165228 s, 24.8 MB/s 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:33.099 /dev/nbd1 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:33.099 08:06:37 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.099 1+0 records in 00:09:33.099 1+0 records out 00:09:33.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212022 s, 19.3 MB/s 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:33.099 08:06:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.099 08:06:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.360 08:06:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.360 { 00:09:33.360 "nbd_device": "/dev/nbd0", 00:09:33.360 "bdev_name": "Malloc0" 00:09:33.360 }, 00:09:33.360 { 00:09:33.360 "nbd_device": "/dev/nbd1", 00:09:33.360 "bdev_name": "Malloc1" 00:09:33.360 } 00:09:33.360 ]' 00:09:33.360 08:06:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.360 { 00:09:33.360 "nbd_device": "/dev/nbd0", 00:09:33.360 "bdev_name": "Malloc0" 00:09:33.360 
}, 00:09:33.360 { 00:09:33.360 "nbd_device": "/dev/nbd1", 00:09:33.360 "bdev_name": "Malloc1" 00:09:33.360 } 00:09:33.360 ]' 00:09:33.360 08:06:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.360 /dev/nbd1' 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.360 /dev/nbd1' 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:33.360 256+0 records in 00:09:33.360 256+0 records out 00:09:33.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127415 s, 82.3 MB/s 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.360 08:06:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.360 256+0 records in 00:09:33.360 256+0 records out 00:09:33.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166145 s, 63.1 MB/s 00:09:33.361 08:06:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.361 08:06:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.361 256+0 records in 00:09:33.361 256+0 records out 00:09:33.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189732 s, 55.3 MB/s 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:33.622 08:06:38 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.622 08:06:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.884 08:06:38 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.884 08:06:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:34.143 08:06:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:34.143 08:06:38 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.403 08:06:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:34.403 [2024-11-20 08:06:38.990844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.403 [2024-11-20 08:06:39.028119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.403 [2024-11-20 08:06:39.028272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.403 [2024-11-20 08:06:39.060109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:34.403 [2024-11-20 08:06:39.060148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.702 08:06:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:37.702 08:06:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:37.702 spdk_app_start Round 1 00:09:37.702 08:06:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1758312 /var/tmp/spdk-nbd.sock 00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1758312 ']' 00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
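The `nbd_get_count` steps traced above (list disks over RPC, extract the `nbd_device` fields, count `/dev/nbd` matches, tolerate zero matches via the `-- # true` guard) can be sketched as a small standalone script. This is an illustrative reconstruction, not the real `bdev/nbd_common.sh` helper: `mock_rpc` stands in for `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks`, and `grep -o` approximates the `jq -r '.[] | .nbd_device'` extraction the trace shows.

```shell
# Hedged sketch of the nbd_get_count pattern from the trace.
# mock_rpc is a stand-in for the real rpc.py nbd_get_disks call.
mock_rpc() {
  echo '[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'
}

nbd_get_count() {
  disks_json=$(mock_rpc)
  # Pull each device path out of the JSON (the real helper uses jq).
  names=$(printf '%s\n' "$disks_json" | grep -o '/dev/nbd[0-9]*' || true)
  # grep -c still prints 0 when nothing matches; `|| true` mirrors the
  # `-- # true` line in the trace, since grep exits non-zero on 0 matches.
  printf '%s\n' "$names" | grep -c /dev/nbd || true
}

count=$(nbd_get_count)
echo "$count"
```

With both disks attached the count is 2; after the `nbd_stop_disk` calls the RPC returns `[]` and the same pipeline yields 0, which is what the `'[' 0 -ne 0 ']'` check in the trace verifies before returning.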
00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.702 08:06:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.702 08:06:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.702 08:06:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:37.702 08:06:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.702 Malloc0 00:09:37.702 08:06:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.702 Malloc1 00:09:37.702 08:06:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.702 08:06:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:37.963 /dev/nbd0 00:09:37.963 08:06:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.963 08:06:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:37.963 1+0 records in 00:09:37.963 1+0 records out 00:09:37.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282387 s, 14.5 MB/s 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:37.963 08:06:42 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:37.963 08:06:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:37.963 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.963 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.963 08:06:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:38.224 /dev/nbd1 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:38.224 1+0 records in 00:09:38.224 1+0 records out 00:09:38.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267185 s, 15.3 MB/s 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:38.224 08:06:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.224 08:06:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:38.486 { 00:09:38.486 "nbd_device": "/dev/nbd0", 00:09:38.486 "bdev_name": "Malloc0" 00:09:38.486 }, 00:09:38.486 { 00:09:38.486 "nbd_device": "/dev/nbd1", 00:09:38.486 "bdev_name": "Malloc1" 00:09:38.486 } 00:09:38.486 ]' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:38.486 { 00:09:38.486 "nbd_device": "/dev/nbd0", 00:09:38.486 "bdev_name": "Malloc0" 00:09:38.486 }, 00:09:38.486 { 00:09:38.486 "nbd_device": "/dev/nbd1", 00:09:38.486 "bdev_name": "Malloc1" 00:09:38.486 } 00:09:38.486 ]' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:38.486 /dev/nbd1' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:38.486 /dev/nbd1' 00:09:38.486 
08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:38.486 256+0 records in 00:09:38.486 256+0 records out 00:09:38.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120506 s, 87.0 MB/s 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:38.486 256+0 records in 00:09:38.486 256+0 records out 00:09:38.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164245 s, 63.8 MB/s 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:38.486 256+0 records in 00:09:38.486 256+0 records out 00:09:38.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264076 s, 39.7 MB/s 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.486 08:06:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.748 08:06:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:39.010 08:06:43 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.010 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.271 08:06:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.271 08:06:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:39.271 08:06:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:39.531 [2024-11-20 08:06:44.032954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.531 [2024-11-20 08:06:44.070018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.531 [2024-11-20 08:06:44.070020] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.531 [2024-11-20 08:06:44.102457] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:39.531 [2024-11-20 08:06:44.102495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:42.833 08:06:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:42.833 08:06:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:42.833 spdk_app_start Round 2 00:09:42.833 08:06:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1758312 /var/tmp/spdk-nbd.sock 00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1758312 ']' 00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:42.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
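The `waitfornbd`/`waitfornbd_exit` sequences traced above all follow the same polling shape: check `/proc/partitions` for the nbd name up to 20 times and break as soon as the device appears (or, for the `_exit` variant, disappears). A minimal runnable sketch, with a temp file standing in for `/proc/partitions` so it runs outside the test host:

```shell
# Sketch of the waitfornbd polling loop visible in the trace; a temp file
# stands in for /proc/partitions.
partitions=$(mktemp)
printf '259 0 1048576 nbd0\n' > "$partitions"

waitfornbd() {
  nbd_name=$1
  i=1
  while [ "$i" -le 20 ]; do
    # -w matches the whole word, so nbd0 does not also match nbd01.
    if grep -q -w "$nbd_name" "$partitions"; then
      return 0    # device appeared; the real helper then dd-reads one block
    fi
    i=$((i + 1))
    sleep 0.1
  done
  return 1        # timed out after 20 attempts
}

waitfornbd nbd0 && status=present || status=absent
rm -f "$partitions"
echo "$status"
```

In the real helper the `break` on a successful `grep` is followed by a direct-I/O `dd` read from the device (the `1+0 records in / 1+0 records out` lines in the trace) to confirm the nbd device is actually readable, not just listed.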
00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.833 08:06:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:42.833 08:06:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.833 08:06:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:42.833 08:06:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.833 Malloc0 00:09:42.833 08:06:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.833 Malloc1 00:09:42.833 08:06:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.833 08:06:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:43.094 /dev/nbd0 00:09:43.094 08:06:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:43.094 08:06:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:43.094 1+0 records in 00:09:43.094 1+0 records out 00:09:43.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244286 s, 16.8 MB/s 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:43.094 08:06:47 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:43.094 08:06:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:43.094 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.094 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.094 08:06:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:43.355 /dev/nbd1 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:43.355 1+0 records in 00:09:43.355 1+0 records out 00:09:43.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163006 s, 25.1 MB/s 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:43.355 08:06:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.355 08:06:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:43.616 { 00:09:43.616 "nbd_device": "/dev/nbd0", 00:09:43.616 "bdev_name": "Malloc0" 00:09:43.616 }, 00:09:43.616 { 00:09:43.616 "nbd_device": "/dev/nbd1", 00:09:43.616 "bdev_name": "Malloc1" 00:09:43.616 } 00:09:43.616 ]' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:43.616 { 00:09:43.616 "nbd_device": "/dev/nbd0", 00:09:43.616 "bdev_name": "Malloc0" 00:09:43.616 }, 00:09:43.616 { 00:09:43.616 "nbd_device": "/dev/nbd1", 00:09:43.616 "bdev_name": "Malloc1" 00:09:43.616 } 00:09:43.616 ]' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:43.616 /dev/nbd1' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:43.616 /dev/nbd1' 00:09:43.616 
08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:43.616 256+0 records in 00:09:43.616 256+0 records out 00:09:43.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126695 s, 82.8 MB/s 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:43.616 256+0 records in 00:09:43.616 256+0 records out 00:09:43.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184427 s, 56.9 MB/s 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:43.616 256+0 records in 00:09:43.616 256+0 records out 00:09:43.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208633 s, 50.3 MB/s 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.616 08:06:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.877 08:06:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:44.137 08:06:48 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:44.137 08:06:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:44.137 08:06:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:44.398 08:06:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:44.666 [2024-11-20 08:06:49.153686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.666 [2024-11-20 08:06:49.190796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.666 [2024-11-20 08:06:49.190798] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.666 [2024-11-20 08:06:49.222662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:44.666 [2024-11-20 08:06:49.222698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:48.010 08:06:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1758312 /var/tmp/spdk-nbd.sock 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1758312 ']' 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:48.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:48.010 08:06:52 event.app_repeat -- event/event.sh@39 -- # killprocess 1758312 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1758312 ']' 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1758312 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:48.010 08:06:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1758312 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1758312' 00:09:48.011 killing process with pid 1758312 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1758312 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1758312 00:09:48.011 spdk_app_start is called in Round 0. 00:09:48.011 Shutdown signal received, stop current app iteration 00:09:48.011 Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 reinitialization... 00:09:48.011 spdk_app_start is called in Round 1. 00:09:48.011 Shutdown signal received, stop current app iteration 00:09:48.011 Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 reinitialization... 00:09:48.011 spdk_app_start is called in Round 2. 
00:09:48.011 Shutdown signal received, stop current app iteration 00:09:48.011 Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 reinitialization... 00:09:48.011 spdk_app_start is called in Round 3. 00:09:48.011 Shutdown signal received, stop current app iteration 00:09:48.011 08:06:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:48.011 08:06:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:48.011 00:09:48.011 real 0m15.587s 00:09:48.011 user 0m33.988s 00:09:48.011 sys 0m2.220s 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.011 08:06:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.011 ************************************ 00:09:48.011 END TEST app_repeat 00:09:48.011 ************************************ 00:09:48.011 08:06:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:48.011 08:06:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:48.011 08:06:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.011 08:06:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.011 08:06:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:48.011 ************************************ 00:09:48.011 START TEST cpu_locks 00:09:48.011 ************************************ 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:48.011 * Looking for test storage... 
00:09:48.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.011 08:06:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.011 --rc genhtml_branch_coverage=1 00:09:48.011 --rc genhtml_function_coverage=1 00:09:48.011 --rc genhtml_legend=1 00:09:48.011 --rc geninfo_all_blocks=1 00:09:48.011 --rc geninfo_unexecuted_blocks=1 00:09:48.011 00:09:48.011 ' 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.011 --rc genhtml_branch_coverage=1 00:09:48.011 --rc genhtml_function_coverage=1 00:09:48.011 --rc genhtml_legend=1 00:09:48.011 --rc geninfo_all_blocks=1 00:09:48.011 --rc geninfo_unexecuted_blocks=1 
00:09:48.011 00:09:48.011 ' 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.011 --rc genhtml_branch_coverage=1 00:09:48.011 --rc genhtml_function_coverage=1 00:09:48.011 --rc genhtml_legend=1 00:09:48.011 --rc geninfo_all_blocks=1 00:09:48.011 --rc geninfo_unexecuted_blocks=1 00:09:48.011 00:09:48.011 ' 00:09:48.011 08:06:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.011 --rc genhtml_branch_coverage=1 00:09:48.011 --rc genhtml_function_coverage=1 00:09:48.011 --rc genhtml_legend=1 00:09:48.011 --rc geninfo_all_blocks=1 00:09:48.011 --rc geninfo_unexecuted_blocks=1 00:09:48.011 00:09:48.011 ' 00:09:48.011 08:06:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:48.012 08:06:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:48.012 08:06:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:48.012 08:06:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:48.012 08:06:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.012 08:06:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.012 08:06:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.012 ************************************ 00:09:48.012 START TEST default_locks 00:09:48.012 ************************************ 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1761604 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1761604 00:09:48.012 08:06:52 
event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1761604 ']' 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.012 08:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.272 [2024-11-20 08:06:52.750651] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:48.272 [2024-11-20 08:06:52.750714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761604 ] 00:09:48.272 [2024-11-20 08:06:52.833600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.272 [2024-11-20 08:06:52.875635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.843 08:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.843 08:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:48.843 08:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1761604 00:09:48.843 08:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1761604 00:09:48.843 08:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.413 lslocks: write error 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1761604 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1761604 ']' 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1761604 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761604 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1761604' 00:09:49.413 killing process with pid 1761604 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1761604 00:09:49.413 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1761604 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1761604 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1761604 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1761604 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1761604 ']' 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1761604) - No such process 00:09:49.674 ERROR: process (pid: 1761604) is no longer running 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:49.674 00:09:49.674 real 0m1.617s 00:09:49.674 user 0m1.725s 00:09:49.674 sys 0m0.578s 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.674 08:06:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 ************************************ 00:09:49.674 END TEST default_locks 00:09:49.674 ************************************ 00:09:49.674 08:06:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:49.674 08:06:54 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.674 08:06:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.674 08:06:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 ************************************ 00:09:49.674 START TEST default_locks_via_rpc 00:09:49.674 ************************************ 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1761969 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1761969 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1761969 ']' 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.674 08:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 [2024-11-20 08:06:54.439982] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:09:49.936 [2024-11-20 08:06:54.440039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761969 ] 00:09:49.936 [2024-11-20 08:06:54.520641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.936 [2024-11-20 08:06:54.559798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:50.506 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.766 08:06:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1761969 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1761969 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1761969 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1761969 ']' 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1761969 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.766 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761969 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761969' 00:09:51.027 killing process with pid 1761969 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1761969 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1761969 00:09:51.027 00:09:51.027 real 0m1.367s 00:09:51.027 user 0m1.473s 00:09:51.027 sys 0m0.458s 00:09:51.027 08:06:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.027 08:06:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.027 ************************************ 00:09:51.027 END TEST default_locks_via_rpc 00:09:51.027 ************************************ 00:09:51.288 08:06:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:51.288 08:06:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.288 08:06:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.288 08:06:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:51.288 ************************************ 00:09:51.288 START TEST non_locking_app_on_locked_coremask 00:09:51.288 ************************************ 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1762319 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1762319 /var/tmp/spdk.sock 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1762319 ']' 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.288 08:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.288 [2024-11-20 08:06:55.882871] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:09:51.288 [2024-11-20 08:06:55.882922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762319 ] 00:09:51.288 [2024-11-20 08:06:55.961431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.288 [2024-11-20 08:06:56.000873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1762606 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1762606 /var/tmp/spdk2.sock 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1762606 ']' 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:52.229 08:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:52.229 [2024-11-20 08:06:56.724320] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:52.229 [2024-11-20 08:06:56.724376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762606 ]
00:09:52.229 [2024-11-20 08:06:56.846775] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:52.229 [2024-11-20 08:06:56.846806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.229 [2024-11-20 08:06:56.919075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.799 08:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.799 08:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:52.799 08:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1762319
00:09:52.799 08:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:52.799 08:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1762319
00:09:53.370 lslocks: write error
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1762319
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1762319 ']'
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1762319
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:53.370 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762319
00:09:53.630 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:53.630 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:53.630 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762319'
00:09:53.630 killing process with pid 1762319
00:09:53.630 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1762319
00:09:53.630 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1762319
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1762606
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1762606 ']'
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1762606
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762606
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762606'
00:09:53.891 killing process with pid 1762606
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1762606
00:09:53.891 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1762606
00:09:54.152
00:09:54.152 real 0m2.952s
00:09:54.152 user 0m3.249s
00:09:54.152 sys 0m0.918s
00:09:54.152 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.152 08:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:54.152 ************************************
00:09:54.152 END TEST non_locking_app_on_locked_coremask
00:09:54.152 ************************************
00:09:54.152 08:06:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:54.152 08:06:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:54.152 08:06:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.152 08:06:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:54.152 ************************************
00:09:54.152 START TEST locking_app_on_unlocked_coremask
00:09:54.152 ************************************
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1763025
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1763025 /var/tmp/spdk.sock
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1763025 ']'
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.152 08:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:54.413 [2024-11-20 08:06:58.912070] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:54.413 [2024-11-20 08:06:58.912118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763025 ]
00:09:54.413 [2024-11-20 08:06:58.991473] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:54.413 [2024-11-20 08:06:58.991502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:54.413 [2024-11-20 08:06:59.026494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1763032
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1763032 /var/tmp/spdk2.sock
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1763032 ']'
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:54.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.673 08:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:54.673 [2024-11-20 08:06:59.268640] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:54.673 [2024-11-20 08:06:59.268689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763032 ]
00:09:54.673 [2024-11-20 08:06:59.392999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:54.934 [2024-11-20 08:06:59.466631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.506 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.506 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:55.506 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1763032
00:09:55.506 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1763032
00:09:55.506 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:56.077 lslocks: write error
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1763025
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1763025 ']'
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1763025
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763025
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763025'
00:09:56.077 killing process with pid 1763025
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1763025
00:09:56.077 08:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1763025
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1763032
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1763032 ']'
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1763032
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:56.340 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763032
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763032'
00:09:56.601 killing process with pid 1763032
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1763032
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1763032
00:09:56.601
00:09:56.601 real 0m2.471s
00:09:56.601 user 0m2.726s
00:09:56.601 sys 0m0.870s
00:09:56.601 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.602 08:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:56.602 ************************************
00:09:56.602 END TEST locking_app_on_unlocked_coremask
00:09:56.602 ************************************
00:09:56.863 08:07:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:56.863 08:07:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:56.863 08:07:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:56.863 08:07:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:56.863 ************************************
00:09:56.863 START TEST locking_app_on_locked_coremask
00:09:56.863 ************************************
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1763501
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1763501 /var/tmp/spdk.sock
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1763501 ']'
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:56.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:56.863 08:07:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:56.863 [2024-11-20 08:07:01.458221] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:56.863 [2024-11-20 08:07:01.458279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763501 ]
00:09:56.863 [2024-11-20 08:07:01.540710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:56.863 [2024-11-20 08:07:01.582786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1763739
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1763739 /var/tmp/spdk2.sock
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1763739 /var/tmp/spdk2.sock
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1763739 /var/tmp/spdk2.sock
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1763739 ']'
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:57.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:57.807 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:57.807 [2024-11-20 08:07:02.312813] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:57.807 [2024-11-20 08:07:02.312876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763739 ]
00:09:57.807 [2024-11-20 08:07:02.433272] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1763501 has claimed it.
00:09:57.807 [2024-11-20 08:07:02.433311] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:58.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1763739) - No such process
00:09:58.376 ERROR: process (pid: 1763739) is no longer running
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1763501
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1763501
00:09:58.376 08:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:58.945 lslocks: write error
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1763501
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1763501 ']'
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1763501
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763501
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763501'
00:09:58.945 killing process with pid 1763501
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1763501
00:09:58.945 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1763501
00:09:59.205
00:09:59.205 real 0m2.298s
00:09:59.205 user 0m2.583s
00:09:59.205 sys 0m0.658s
00:09:59.205 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.205 08:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:59.205 ************************************
00:09:59.205 END TEST locking_app_on_locked_coremask
00:09:59.205 ************************************
00:09:59.205 08:07:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:59.205 08:07:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:59.205 08:07:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:59.205 08:07:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:59.205 ************************************
00:09:59.205 START TEST locking_overlapped_coremask
00:09:59.205 ************************************
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1764099
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1764099 /var/tmp/spdk.sock
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1764099 ']'
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:59.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.205 08:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:59.205 [2024-11-20 08:07:03.832931] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:09:59.205 [2024-11-20 08:07:03.832985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764099 ]
00:09:59.205 [2024-11-20 08:07:03.913312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:59.464 [2024-11-20 08:07:03.952836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.464 [2024-11-20 08:07:03.952973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:59.464 [2024-11-20 08:07:03.953064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1764135
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1764135 /var/tmp/spdk2.sock
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1764135 /var/tmp/spdk2.sock
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1764135 /var/tmp/spdk2.sock
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1764135 ']'
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:00.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:00.034 08:07:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:00.034 [2024-11-20 08:07:04.683134] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:10:00.034 [2024-11-20 08:07:04.683187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764135 ]
00:10:00.294 [2024-11-20 08:07:04.780812] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1764099 has claimed it.
00:10:00.294 [2024-11-20 08:07:04.780847] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:00.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1764135) - No such process
00:10:00.865 ERROR: process (pid: 1764135) is no longer running
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1764099
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1764099 ']'
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1764099
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764099
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764099'
00:10:00.865 killing process with pid 1764099
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1764099
00:10:00.865 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1764099
00:10:00.865
00:10:00.865 real 0m1.799s
00:10:00.865 user 0m5.192s
00:10:00.865 sys 0m0.391s
00:10:00.866 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:00.866 08:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:00.866 ************************************
00:10:00.866 END TEST locking_overlapped_coremask
00:10:00.866 ************************************
00:10:01.127 08:07:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:10:01.127 08:07:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:01.127 08:07:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.127 08:07:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:01.127 ************************************
00:10:01.127 START TEST locking_overlapped_coremask_via_rpc
00:10:01.127 ************************************
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1764478
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1764478 /var/tmp/spdk.sock
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1764478 ']'
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.127 08:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:01.127 [2024-11-20 08:07:05.708362] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:10:01.127 [2024-11-20 08:07:05.708412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764478 ]
00:10:01.127 [2024-11-20 08:07:05.788649] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:01.127 [2024-11-20 08:07:05.788679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:01.127 [2024-11-20 08:07:05.831183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:01.127 [2024-11-20 08:07:05.831303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:01.127 [2024-11-20 08:07:05.831305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1764611
00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 1764611 /var/tmp/spdk2.sock 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1764611 ']' 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.071 08:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.071 [2024-11-20 08:07:06.557108] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:10:02.071 [2024-11-20 08:07:06.557152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764611 ] 00:10:02.071 [2024-11-20 08:07:06.646649] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:02.071 [2024-11-20 08:07:06.646670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.071 [2024-11-20 08:07:06.705915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.071 [2024-11-20 08:07:06.709988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.071 [2024-11-20 08:07:06.709990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.644 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.644 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:02.644 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:02.644 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.644 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.904 08:07:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.904 [2024-11-20 08:07:07.386924] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1764478 has claimed it. 00:10:02.904 request: 00:10:02.904 { 00:10:02.904 "method": "framework_enable_cpumask_locks", 00:10:02.904 "req_id": 1 00:10:02.904 } 00:10:02.904 Got JSON-RPC error response 00:10:02.904 response: 00:10:02.904 { 00:10:02.904 "code": -32603, 00:10:02.904 "message": "Failed to claim CPU core: 2" 00:10:02.904 } 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1764478 /var/tmp/spdk.sock 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1764478 ']' 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.904 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1764611 /var/tmp/spdk2.sock 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1764611 ']' 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.905 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:03.165 00:10:03.165 real 0m2.117s 00:10:03.165 user 0m0.895s 00:10:03.165 sys 0m0.143s 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.165 08:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.165 ************************************ 00:10:03.165 END TEST locking_overlapped_coremask_via_rpc 00:10:03.165 ************************************ 00:10:03.165 08:07:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:03.165 08:07:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1764478 ]] 00:10:03.165 08:07:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1764478 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1764478 ']' 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1764478 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764478 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764478' 00:10:03.165 killing process with pid 1764478 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1764478 00:10:03.165 08:07:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1764478 00:10:03.426 08:07:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1764611 ]] 00:10:03.426 08:07:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1764611 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1764611 ']' 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1764611 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764611 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1764611' 00:10:03.426 killing process with pid 1764611 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1764611 00:10:03.426 08:07:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1764611 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1764478 ]] 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1764478 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1764478 ']' 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1764478 00:10:03.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1764478) - No such process 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1764478 is not found' 00:10:03.686 Process with pid 1764478 is not found 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1764611 ]] 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1764611 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1764611 ']' 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1764611 00:10:03.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1764611) - No such process 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1764611 is not found' 00:10:03.686 Process with pid 1764611 is not found 00:10:03.686 08:07:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:03.686 00:10:03.686 real 0m15.889s 00:10:03.686 user 0m28.158s 00:10:03.686 sys 0m4.950s 00:10:03.686 08:07:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.686 
08:07:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.686 ************************************ 00:10:03.686 END TEST cpu_locks 00:10:03.686 ************************************ 00:10:03.686 00:10:03.686 real 0m40.759s 00:10:03.686 user 1m18.933s 00:10:03.686 sys 0m8.142s 00:10:03.686 08:07:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.686 08:07:08 event -- common/autotest_common.sh@10 -- # set +x 00:10:03.686 ************************************ 00:10:03.686 END TEST event 00:10:03.686 ************************************ 00:10:03.948 08:07:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:03.948 08:07:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.948 08:07:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.948 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:10:03.948 ************************************ 00:10:03.948 START TEST thread 00:10:03.948 ************************************ 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:03.948 * Looking for test storage... 
00:10:03.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.948 08:07:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.948 08:07:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.948 08:07:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.948 08:07:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.948 08:07:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.948 08:07:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.948 08:07:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.948 08:07:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.948 08:07:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.948 08:07:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.948 08:07:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.948 08:07:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:03.948 08:07:08 thread -- scripts/common.sh@345 -- # : 1 00:10:03.948 08:07:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.948 08:07:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.948 08:07:08 thread -- scripts/common.sh@365 -- # decimal 1 00:10:03.948 08:07:08 thread -- scripts/common.sh@353 -- # local d=1 00:10:03.948 08:07:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.948 08:07:08 thread -- scripts/common.sh@355 -- # echo 1 00:10:03.948 08:07:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.948 08:07:08 thread -- scripts/common.sh@366 -- # decimal 2 00:10:03.948 08:07:08 thread -- scripts/common.sh@353 -- # local d=2 00:10:03.948 08:07:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.948 08:07:08 thread -- scripts/common.sh@355 -- # echo 2 00:10:03.948 08:07:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.948 08:07:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.948 08:07:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.948 08:07:08 thread -- scripts/common.sh@368 -- # return 0 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.948 --rc genhtml_branch_coverage=1 00:10:03.948 --rc genhtml_function_coverage=1 00:10:03.948 --rc genhtml_legend=1 00:10:03.948 --rc geninfo_all_blocks=1 00:10:03.948 --rc geninfo_unexecuted_blocks=1 00:10:03.948 00:10:03.948 ' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.948 --rc genhtml_branch_coverage=1 00:10:03.948 --rc genhtml_function_coverage=1 00:10:03.948 --rc genhtml_legend=1 00:10:03.948 --rc geninfo_all_blocks=1 00:10:03.948 --rc geninfo_unexecuted_blocks=1 00:10:03.948 00:10:03.948 ' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.948 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.948 --rc genhtml_branch_coverage=1 00:10:03.948 --rc genhtml_function_coverage=1 00:10:03.948 --rc genhtml_legend=1 00:10:03.948 --rc geninfo_all_blocks=1 00:10:03.948 --rc geninfo_unexecuted_blocks=1 00:10:03.948 00:10:03.948 ' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.948 --rc genhtml_branch_coverage=1 00:10:03.948 --rc genhtml_function_coverage=1 00:10:03.948 --rc genhtml_legend=1 00:10:03.948 --rc geninfo_all_blocks=1 00:10:03.948 --rc geninfo_unexecuted_blocks=1 00:10:03.948 00:10:03.948 ' 00:10:03.948 08:07:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.948 08:07:08 thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.209 ************************************ 00:10:04.209 START TEST thread_poller_perf 00:10:04.209 ************************************ 00:10:04.209 08:07:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:04.209 [2024-11-20 08:07:08.705457] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:10:04.209 [2024-11-20 08:07:08.705568] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765257 ] 00:10:04.209 [2024-11-20 08:07:08.792002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.209 [2024-11-20 08:07:08.828042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.209 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:05.184 [2024-11-20T07:07:09.913Z] ====================================== 00:10:05.184 [2024-11-20T07:07:09.913Z] busy:2407445528 (cyc) 00:10:05.184 [2024-11-20T07:07:09.913Z] total_run_count: 287000 00:10:05.184 [2024-11-20T07:07:09.913Z] tsc_hz: 2400000000 (cyc) 00:10:05.184 [2024-11-20T07:07:09.913Z] ====================================== 00:10:05.184 [2024-11-20T07:07:09.913Z] poller_cost: 8388 (cyc), 3495 (nsec) 00:10:05.184 00:10:05.184 real 0m1.184s 00:10:05.184 user 0m1.109s 00:10:05.184 sys 0m0.071s 00:10:05.184 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.184 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:05.184 ************************************ 00:10:05.184 END TEST thread_poller_perf 00:10:05.184 ************************************ 00:10:05.517 08:07:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:05.517 08:07:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:05.517 08:07:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.517 08:07:09 thread -- common/autotest_common.sh@10 -- # set +x 00:10:05.517 ************************************ 00:10:05.517 START TEST thread_poller_perf 00:10:05.517 
************************************ 00:10:05.517 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:05.517 [2024-11-20 08:07:09.958225] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:10:05.517 [2024-11-20 08:07:09.958322] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765412 ] 00:10:05.517 [2024-11-20 08:07:10.053849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.517 [2024-11-20 08:07:10.092034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.517 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:06.457 [2024-11-20T07:07:11.186Z] ====================================== 00:10:06.457 [2024-11-20T07:07:11.186Z] busy:2402092622 (cyc) 00:10:06.457 [2024-11-20T07:07:11.186Z] total_run_count: 3813000 00:10:06.457 [2024-11-20T07:07:11.186Z] tsc_hz: 2400000000 (cyc) 00:10:06.457 [2024-11-20T07:07:11.186Z] ====================================== 00:10:06.457 [2024-11-20T07:07:11.186Z] poller_cost: 629 (cyc), 262 (nsec) 00:10:06.457 00:10:06.457 real 0m1.189s 00:10:06.457 user 0m1.094s 00:10:06.457 sys 0m0.091s 00:10:06.457 08:07:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.457 08:07:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:06.457 ************************************ 00:10:06.457 END TEST thread_poller_perf 00:10:06.457 ************************************ 00:10:06.457 08:07:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:06.457 00:10:06.457 real 0m2.704s 00:10:06.457 user 0m2.367s 00:10:06.457 sys 0m0.349s 00:10:06.457 08:07:11 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.457 08:07:11 thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.457 ************************************ 00:10:06.457 END TEST thread 00:10:06.457 ************************************ 00:10:06.718 08:07:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:06.718 08:07:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:06.718 08:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.718 08:07:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.718 08:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:06.718 ************************************ 00:10:06.718 START TEST app_cmdline 00:10:06.718 ************************************ 00:10:06.718 08:07:11 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:06.718 * Looking for test storage... 00:10:06.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:06.718 08:07:11 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.718 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.718 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.718 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.718 08:07:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.719 08:07:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.719 --rc genhtml_branch_coverage=1 
00:10:06.719 --rc genhtml_function_coverage=1 00:10:06.719 --rc genhtml_legend=1 00:10:06.719 --rc geninfo_all_blocks=1 00:10:06.719 --rc geninfo_unexecuted_blocks=1 00:10:06.719 00:10:06.719 ' 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.719 --rc genhtml_branch_coverage=1 00:10:06.719 --rc genhtml_function_coverage=1 00:10:06.719 --rc genhtml_legend=1 00:10:06.719 --rc geninfo_all_blocks=1 00:10:06.719 --rc geninfo_unexecuted_blocks=1 00:10:06.719 00:10:06.719 ' 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.719 --rc genhtml_branch_coverage=1 00:10:06.719 --rc genhtml_function_coverage=1 00:10:06.719 --rc genhtml_legend=1 00:10:06.719 --rc geninfo_all_blocks=1 00:10:06.719 --rc geninfo_unexecuted_blocks=1 00:10:06.719 00:10:06.719 ' 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.719 --rc genhtml_branch_coverage=1 00:10:06.719 --rc genhtml_function_coverage=1 00:10:06.719 --rc genhtml_legend=1 00:10:06.719 --rc geninfo_all_blocks=1 00:10:06.719 --rc geninfo_unexecuted_blocks=1 00:10:06.719 00:10:06.719 ' 00:10:06.719 08:07:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:06.719 08:07:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1765715 00:10:06.719 08:07:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1765715 00:10:06.719 08:07:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1765715 ']' 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.719 08:07:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:06.979 [2024-11-20 08:07:11.493154] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:10:06.979 [2024-11-20 08:07:11.493227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765715 ] 00:10:06.979 [2024-11-20 08:07:11.575283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.979 [2024-11-20 08:07:11.616921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:07.920 { 00:10:07.920 "version": "SPDK v25.01-pre git sha1 c788bae60", 00:10:07.920 "fields": { 00:10:07.920 "major": 25, 00:10:07.920 "minor": 1, 00:10:07.920 "patch": 0, 00:10:07.920 "suffix": "-pre", 00:10:07.920 "commit": "c788bae60" 00:10:07.920 } 00:10:07.920 } 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:07.920 08:07:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:07.920 08:07:12 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:07.920 request: 00:10:07.920 { 00:10:07.920 "method": "env_dpdk_get_mem_stats", 00:10:07.920 "req_id": 1 00:10:07.920 } 00:10:07.920 Got JSON-RPC error response 00:10:07.920 response: 00:10:07.920 { 00:10:07.920 "code": -32601, 00:10:07.920 "message": "Method not found" 00:10:07.920 } 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.181 08:07:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1765715 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1765715 ']' 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1765715 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765715 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765715' 00:10:08.181 killing process with pid 1765715 00:10:08.181 
08:07:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 1765715 00:10:08.181 08:07:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 1765715 00:10:08.443 00:10:08.443 real 0m1.698s 00:10:08.443 user 0m2.032s 00:10:08.443 sys 0m0.435s 00:10:08.443 08:07:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.443 08:07:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:08.443 ************************************ 00:10:08.443 END TEST app_cmdline 00:10:08.443 ************************************ 00:10:08.443 08:07:12 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:08.443 08:07:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.443 08:07:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.443 08:07:12 -- common/autotest_common.sh@10 -- # set +x 00:10:08.443 ************************************ 00:10:08.443 START TEST version 00:10:08.443 ************************************ 00:10:08.443 08:07:13 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:08.443 * Looking for test storage... 
00:10:08.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:08.443 08:07:13 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.443 08:07:13 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.443 08:07:13 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.704 08:07:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.704 08:07:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.704 08:07:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.704 08:07:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.704 08:07:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.704 08:07:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.704 08:07:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.704 08:07:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.704 08:07:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.704 08:07:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.704 08:07:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.704 08:07:13 version -- scripts/common.sh@344 -- # case "$op" in 00:10:08.704 08:07:13 version -- scripts/common.sh@345 -- # : 1 00:10:08.704 08:07:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.704 08:07:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.704 08:07:13 version -- scripts/common.sh@365 -- # decimal 1 00:10:08.704 08:07:13 version -- scripts/common.sh@353 -- # local d=1 00:10:08.704 08:07:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.704 08:07:13 version -- scripts/common.sh@355 -- # echo 1 00:10:08.704 08:07:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.704 08:07:13 version -- scripts/common.sh@366 -- # decimal 2 00:10:08.704 08:07:13 version -- scripts/common.sh@353 -- # local d=2 00:10:08.704 08:07:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.704 08:07:13 version -- scripts/common.sh@355 -- # echo 2 00:10:08.704 08:07:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.704 08:07:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.704 08:07:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.704 08:07:13 version -- scripts/common.sh@368 -- # return 0 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.704 --rc genhtml_branch_coverage=1 00:10:08.704 --rc genhtml_function_coverage=1 00:10:08.704 --rc genhtml_legend=1 00:10:08.704 --rc geninfo_all_blocks=1 00:10:08.704 --rc geninfo_unexecuted_blocks=1 00:10:08.704 00:10:08.704 ' 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.704 --rc genhtml_branch_coverage=1 00:10:08.704 --rc genhtml_function_coverage=1 00:10:08.704 --rc genhtml_legend=1 00:10:08.704 --rc geninfo_all_blocks=1 00:10:08.704 --rc geninfo_unexecuted_blocks=1 00:10:08.704 00:10:08.704 ' 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.704 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.704 --rc genhtml_branch_coverage=1 00:10:08.704 --rc genhtml_function_coverage=1 00:10:08.704 --rc genhtml_legend=1 00:10:08.704 --rc geninfo_all_blocks=1 00:10:08.704 --rc geninfo_unexecuted_blocks=1 00:10:08.704 00:10:08.704 ' 00:10:08.704 08:07:13 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.704 --rc genhtml_branch_coverage=1 00:10:08.704 --rc genhtml_function_coverage=1 00:10:08.704 --rc genhtml_legend=1 00:10:08.704 --rc geninfo_all_blocks=1 00:10:08.704 --rc geninfo_unexecuted_blocks=1 00:10:08.705 00:10:08.705 ' 00:10:08.705 08:07:13 version -- app/version.sh@17 -- # get_header_version major 00:10:08.705 08:07:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # cut -f2 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # tr -d '"' 00:10:08.705 08:07:13 version -- app/version.sh@17 -- # major=25 00:10:08.705 08:07:13 version -- app/version.sh@18 -- # get_header_version minor 00:10:08.705 08:07:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # cut -f2 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # tr -d '"' 00:10:08.705 08:07:13 version -- app/version.sh@18 -- # minor=1 00:10:08.705 08:07:13 version -- app/version.sh@19 -- # get_header_version patch 00:10:08.705 08:07:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # cut -f2 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # tr -d '"' 00:10:08.705 
08:07:13 version -- app/version.sh@19 -- # patch=0 00:10:08.705 08:07:13 version -- app/version.sh@20 -- # get_header_version suffix 00:10:08.705 08:07:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # cut -f2 00:10:08.705 08:07:13 version -- app/version.sh@14 -- # tr -d '"' 00:10:08.705 08:07:13 version -- app/version.sh@20 -- # suffix=-pre 00:10:08.705 08:07:13 version -- app/version.sh@22 -- # version=25.1 00:10:08.705 08:07:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:08.705 08:07:13 version -- app/version.sh@28 -- # version=25.1rc0 00:10:08.705 08:07:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.705 08:07:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:08.705 08:07:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:08.705 08:07:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:08.705 00:10:08.705 real 0m0.283s 00:10:08.705 user 0m0.172s 00:10:08.705 sys 0m0.160s 00:10:08.705 08:07:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.705 08:07:13 version -- common/autotest_common.sh@10 -- # set +x 00:10:08.705 ************************************ 00:10:08.705 END TEST version 00:10:08.705 ************************************ 00:10:08.705 08:07:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:08.705 08:07:13 -- spdk/autotest.sh@194 -- # uname -s 00:10:08.705 08:07:13 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:10:08.705 08:07:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:08.705 08:07:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:08.705 08:07:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:08.705 08:07:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.705 08:07:13 -- common/autotest_common.sh@10 -- # set +x 00:10:08.705 08:07:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:10:08.705 08:07:13 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:08.705 08:07:13 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:08.705 08:07:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.705 08:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.705 08:07:13 -- common/autotest_common.sh@10 -- # set +x 00:10:08.705 ************************************ 00:10:08.705 START TEST nvmf_tcp 00:10:08.705 ************************************ 00:10:08.705 08:07:13 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:08.966 * Looking for test storage... 
00:10:08.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.966 08:07:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.966 --rc genhtml_branch_coverage=1 00:10:08.966 --rc genhtml_function_coverage=1 00:10:08.966 --rc genhtml_legend=1 00:10:08.966 --rc geninfo_all_blocks=1 00:10:08.966 --rc geninfo_unexecuted_blocks=1 00:10:08.966 00:10:08.966 ' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.966 --rc genhtml_branch_coverage=1 00:10:08.966 --rc genhtml_function_coverage=1 00:10:08.966 --rc genhtml_legend=1 00:10:08.966 --rc geninfo_all_blocks=1 00:10:08.966 --rc geninfo_unexecuted_blocks=1 00:10:08.966 00:10:08.966 ' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:10:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.966 --rc genhtml_branch_coverage=1 00:10:08.966 --rc genhtml_function_coverage=1 00:10:08.966 --rc genhtml_legend=1 00:10:08.966 --rc geninfo_all_blocks=1 00:10:08.966 --rc geninfo_unexecuted_blocks=1 00:10:08.966 00:10:08.966 ' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.966 --rc genhtml_branch_coverage=1 00:10:08.966 --rc genhtml_function_coverage=1 00:10:08.966 --rc genhtml_legend=1 00:10:08.966 --rc geninfo_all_blocks=1 00:10:08.966 --rc geninfo_unexecuted_blocks=1 00:10:08.966 00:10:08.966 ' 00:10:08.966 08:07:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.966 08:07:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.966 ************************************ 00:10:08.966 START TEST nvmf_target_core 00:10:08.966 ************************************ 00:10:08.966 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:09.227 * Looking for test storage... 
00:10:09.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.227 --rc genhtml_branch_coverage=1 00:10:09.227 --rc genhtml_function_coverage=1 00:10:09.227 --rc genhtml_legend=1 00:10:09.227 --rc geninfo_all_blocks=1 00:10:09.227 --rc geninfo_unexecuted_blocks=1 00:10:09.227 00:10:09.227 ' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.227 --rc genhtml_branch_coverage=1 
00:10:09.227 --rc genhtml_function_coverage=1 00:10:09.227 --rc genhtml_legend=1 00:10:09.227 --rc geninfo_all_blocks=1 00:10:09.227 --rc geninfo_unexecuted_blocks=1 00:10:09.227 00:10:09.227 ' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.227 --rc genhtml_branch_coverage=1 00:10:09.227 --rc genhtml_function_coverage=1 00:10:09.227 --rc genhtml_legend=1 00:10:09.227 --rc geninfo_all_blocks=1 00:10:09.227 --rc geninfo_unexecuted_blocks=1 00:10:09.227 00:10:09.227 ' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.227 --rc genhtml_branch_coverage=1 00:10:09.227 --rc genhtml_function_coverage=1 00:10:09.227 --rc genhtml_legend=1 00:10:09.227 --rc geninfo_all_blocks=1 00:10:09.227 --rc geninfo_unexecuted_blocks=1 00:10:09.227 00:10:09.227 ' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:09.227 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:09.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.228 
************************************ 00:10:09.228 START TEST nvmf_abort 00:10:09.228 ************************************ 00:10:09.228 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:09.488 * Looking for test storage... 00:10:09.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.488 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.488 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.488 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.488 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.489 
08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.489 --rc genhtml_branch_coverage=1 00:10:09.489 --rc genhtml_function_coverage=1 00:10:09.489 --rc genhtml_legend=1 00:10:09.489 --rc geninfo_all_blocks=1 00:10:09.489 --rc geninfo_unexecuted_blocks=1 00:10:09.489 00:10:09.489 ' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.489 --rc genhtml_branch_coverage=1 00:10:09.489 --rc genhtml_function_coverage=1 00:10:09.489 --rc genhtml_legend=1 00:10:09.489 --rc geninfo_all_blocks=1 00:10:09.489 --rc geninfo_unexecuted_blocks=1 00:10:09.489 00:10:09.489 ' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.489 --rc genhtml_branch_coverage=1 00:10:09.489 --rc genhtml_function_coverage=1 00:10:09.489 --rc genhtml_legend=1 00:10:09.489 --rc geninfo_all_blocks=1 00:10:09.489 --rc geninfo_unexecuted_blocks=1 00:10:09.489 00:10:09.489 ' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.489 --rc genhtml_branch_coverage=1 00:10:09.489 --rc genhtml_function_coverage=1 00:10:09.489 --rc genhtml_legend=1 00:10:09.489 --rc geninfo_all_blocks=1 00:10:09.489 --rc geninfo_unexecuted_blocks=1 00:10:09.489 00:10:09.489 ' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.489 08:07:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:10:09.489 08:07:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:09.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:10:09.489 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:09.490 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:09.490 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:09.490 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:09.490 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:10:09.490 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:10:17.629 08:07:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:17.629 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:17.629 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:17.629 Found net devices under 0000:31:00.0: cvl_0_0 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:17.629 Found net devices under 0000:31:00.1: cvl_0_1 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.629 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:17.630 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:17.892 08:07:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:17.892 10.0.0.1 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:17.892 10.0.0.2 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:17.892 08:07:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:17.892 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:17.893 08:07:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:17.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.683 ms 00:10:17.893 00:10:17.893 --- 10.0.0.1 ping statistics --- 00:10:17.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.893 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:17.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:10:17.893 00:10:17.893 --- 10.0.0.2 ping statistics --- 00:10:17.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.893 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:17.893 
08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:17.893 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1770892 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1770892 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1770892 ']' 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.155 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.155 [2024-11-20 08:07:22.770950] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:10:18.155 [2024-11-20 08:07:22.771017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.155 [2024-11-20 08:07:22.879281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:18.415 [2024-11-20 08:07:22.933177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.415 [2024-11-20 08:07:22.933227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.415 [2024-11-20 08:07:22.933236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.415 [2024-11-20 08:07:22.933243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:18.415 [2024-11-20 08:07:22.933250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.415 [2024-11-20 08:07:22.935305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.415 [2024-11-20 08:07:22.935471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.415 [2024-11-20 08:07:22.935472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 [2024-11-20 08:07:23.606508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 
08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 Malloc0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 Delay0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 
08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 [2024-11-20 08:07:23.682981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.986 08:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:19.247 [2024-11-20 08:07:23.812438] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:21.158 [2024-11-20 08:07:25.882374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4dc0 is same with the state(6) to be set 00:10:21.419 Initializing NVMe Controllers 00:10:21.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:21.419 controller IO queue size 128 less than required 00:10:21.419 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:21.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:21.419 Initialization complete. Launching workers. 
00:10:21.419 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 28036 00:10:21.419 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28099, failed to submit 62 00:10:21.419 success 28040, unsuccessful 59, failed 0 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:21.419 rmmod nvme_tcp 00:10:21.419 rmmod nvme_fabrics 00:10:21.419 rmmod nvme_keyring 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:10:21.419 08:07:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1770892 ']' 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1770892 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1770892 ']' 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1770892 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.419 08:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1770892 00:10:21.419 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:21.419 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:21.419 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1770892' 00:10:21.419 killing process with pid 1770892 00:10:21.419 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1770892 00:10:21.419 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1770892 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:21.679 08:07:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:23.591 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:10:23.592 00:10:23.592 real 0m14.354s 00:10:23.592 user 0m14.005s 00:10:23.592 sys 0m7.299s 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:23.592 ************************************ 00:10:23.592 END TEST nvmf_abort 00:10:23.592 ************************************ 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.592 ************************************ 00:10:23.592 START TEST 
nvmf_ns_hotplug_stress 00:10:23.592 ************************************ 00:10:23.592 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:23.853 * Looking for test storage... 00:10:23.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.853 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.854 08:07:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.854 08:07:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.854 --rc genhtml_branch_coverage=1 00:10:23.854 --rc genhtml_function_coverage=1 00:10:23.854 --rc genhtml_legend=1 00:10:23.854 --rc geninfo_all_blocks=1 00:10:23.854 --rc geninfo_unexecuted_blocks=1 00:10:23.854 00:10:23.854 ' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.854 --rc genhtml_branch_coverage=1 00:10:23.854 --rc genhtml_function_coverage=1 00:10:23.854 --rc genhtml_legend=1 00:10:23.854 --rc geninfo_all_blocks=1 00:10:23.854 --rc geninfo_unexecuted_blocks=1 00:10:23.854 00:10:23.854 ' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.854 --rc genhtml_branch_coverage=1 00:10:23.854 --rc genhtml_function_coverage=1 00:10:23.854 --rc genhtml_legend=1 00:10:23.854 --rc geninfo_all_blocks=1 00:10:23.854 --rc geninfo_unexecuted_blocks=1 00:10:23.854 00:10:23.854 ' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.854 --rc genhtml_branch_coverage=1 00:10:23.854 --rc genhtml_function_coverage=1 00:10:23.854 
--rc genhtml_legend=1 00:10:23.854 --rc geninfo_all_blocks=1 00:10:23.854 --rc geninfo_unexecuted_blocks=1 00:10:23.854 00:10:23.854 ' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
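The dotted-version comparison traced above (the `decimal` / `ver1[v]` / `ver2[v]` loop from `scripts/common.sh`) can be sketched as a small bash function. This is a hedged re-creation, not the SPDK source: the function name `version_ge` and its exact field handling are assumptions, but the field-by-field numeric compare matches what the trace shows.

```shell
# Sketch (assumed names, not SPDK's exact code) of the traced version check:
# split both versions on dots, then compare field by field as integers.
version_ge() {
    local IFS=.
    # Unquoted expansion splits "2.39.2" into (2 39 2) under IFS=.
    local -a ver1=($1) ver2=($2)
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so 1.2 compares like 1.2.0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 0   # first differing field decides
        (( a < b )) && return 1
    done
    return 0                      # equal versions compare as >=
}
```

Note that the numeric compare is what makes `1.10` newer than `1.9`; a plain string comparison would get that case wrong.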
00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:23.854 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:23.854 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:10:23.855 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:10:31.994 08:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:31.994 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:31.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:31.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:31.995 Found net devices under 0000:31:00.0: cvl_0_0 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:31.995 08:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:31.995 Found net devices under 0000:31:00.1: cvl_0_1 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # 
create_target_ns 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
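The `[: : integer expression expected` message printed earlier in this trace (from `nvmf/common.sh` line 31, where `'[' '' -eq 1 ']'` ran against an empty value) is a generic bash failure mode worth noting: `test`'s `-eq` requires integers on both sides, so an unset or empty variable makes the test emit that warning and return false. A minimal reproduction, using a hypothetical variable name rather than the script's actual one:

```shell
# Assumed variable name for illustration; the traced script compared an
# empty expansion the same way.
flag=""

# This form prints "[: : integer expression expected" to stderr and the
# test evaluates false (suppressed here to keep output clean):
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Supplying a default makes the comparison always see an integer:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

The `${var:-0}` guard is the usual fix when a numeric test may run before the variable is assigned.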
00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:31.995 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:32.255 10.0.0.1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:32.255 10.0.0.2 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip 
link set cvl_0_0 up' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:10:32.255 08:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:32.255 
08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:32.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.639 ms 00:10:32.255 00:10:32.255 --- 10.0.0.1 ping statistics --- 00:10:32.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.255 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:32.255 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:32.516 08:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:32.516 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:32.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:10:32.516 00:10:32.516 --- 10.0.0.2 ping statistics --- 00:10:32.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.516 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:32.516 08:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:32.516 08:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:32.516 08:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:32.516 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1776320 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 1776320 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1776320 ']' 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:32.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.517 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.517 [2024-11-20 08:07:37.165981] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:10:32.517 [2024-11-20 08:07:37.166036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.776 [2024-11-20 08:07:37.268813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.776 [2024-11-20 08:07:37.315852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.776 [2024-11-20 08:07:37.315908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.776 [2024-11-20 08:07:37.315917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.776 [2024-11-20 08:07:37.315924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.776 [2024-11-20 08:07:37.315930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.776 [2024-11-20 08:07:37.317908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.776 [2024-11-20 08:07:37.318099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.776 [2024-11-20 08:07:37.318100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.401 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.401 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:33.401 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:33.401 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.401 08:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.401 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.401 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:33.401 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.661 [2024-11-20 08:07:38.161332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.661 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:33.661 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.921 [2024-11-20 08:07:38.530761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.921 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:34.180 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:34.180 Malloc0 00:10:34.440 08:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:34.440 Delay0 00:10:34.440 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.700 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:34.960 NULL1 00:10:34.960 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:34.960 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:34.960 08:07:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1776822 00:10:34.960 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:34.960 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.344 Read completed with error (sct=0, sc=11) 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.344 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:36.344 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:36.604 true 00:10:36.604 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:36.604 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.546 08:07:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.546 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:37.547 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:37.806 true 00:10:37.806 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:37.806 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.806 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.066 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:38.066 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:38.327 true 00:10:38.327 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:38.327 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.327 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.587 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:38.587 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:38.847 true 00:10:38.847 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:38.847 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.847 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.108 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:39.108 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:39.368 true 00:10:39.368 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:39.368 08:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.307 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.567 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:40.567 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:40.828 true 00:10:40.828 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:40.828 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.769 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.769 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:41.769 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1007 00:10:42.028 true 00:10:42.028 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:42.028 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.288 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.288 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:42.288 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:42.549 true 00:10:42.549 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:42.549 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.810 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.810 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:42.810 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:43.070 true 00:10:43.070 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1776822 00:10:43.070 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.332 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.332 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:43.332 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:43.594 true 00:10:43.594 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:43.594 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.675 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:10:44.952 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:44.952 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:45.213 true 00:10:45.213 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:45.213 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.831 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.093 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:46.093 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:46.355 true 00:10:46.355 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:46.355 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.616 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.616 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 
00:10:46.616 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:46.878 true 00:10:46.878 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:46.878 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.261 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:48.261 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:48.261 true 00:10:48.261 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:48.261 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.203 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.463 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:49.463 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:49.463 true 00:10:49.463 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:49.463 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.722 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.982 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:49.983 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:49.983 true 00:10:49.983 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:49.983 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.365 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 08:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.365 08:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:51.365 08:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:51.625 true 00:10:51.625 08:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:51.625 08:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.566 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.566 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:52.566 08:07:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:52.826 true 00:10:52.826 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:52.826 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.085 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.085 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:53.085 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:53.345 true 00:10:53.345 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:53.345 08:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.605 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.865 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:53.865 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:53.865 true 00:10:53.865 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:53.865 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.126 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.385 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:54.385 08:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:54.385 true 00:10:54.385 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:54.385 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.774 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:55.774 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:56.034 true 00:10:56.034 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:56.034 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.973 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.973 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:56.973 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:57.233 true 00:10:57.233 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:57.233 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:57.233 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.493 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:57.493 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:57.754 true 00:10:57.754 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:57.754 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.014 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.014 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:58.015 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:58.275 true 00:10:58.275 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:58.275 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.536 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.536 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:58.536 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:58.797 true 00:10:58.797 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:10:58.797 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.181 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:00.181 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:00.181 true 
00:11:00.441 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:11:00.441 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.010 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.270 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:01.270 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:01.530 true 00:11:01.530 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:11:01.530 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.790 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.790 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:01.790 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:02.051 true 00:11:02.051 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 
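The `kill -0` guard seen on every iteration sends no signal at all; its exit status merely reports whether the PID still exists, so the loop keeps running until the stress target dies. A small self-contained illustration, using a throwaway `sleep` process rather than the SPDK target:

```shell
sleep 30 &                       # stand-in for the target process
pid=$!
kill -0 "$pid" 2>/dev/null && alive_before=yes || alive_before=no
kill "$pid" 2>/dev/null          # terminate the stand-in
wait "$pid" 2>/dev/null || true  # reap it, ignoring the SIGTERM exit status
kill -0 "$pid" 2>/dev/null && alive_after=yes || alive_after=no
echo "before=$alive_before after=$alive_after"
```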
00:11:02.051 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.434 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:03.434 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:03.694 true 00:11:03.694 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:11:03.694 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.635 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.635 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:04.635 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:04.895 true 00:11:04.895 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822 00:11:04.895 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.895 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.155 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:05.155 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:05.155 Initializing NVMe Controllers 00:11:05.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.155 Controller IO queue size 128, less than required. 00:11:05.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:05.155 Controller IO queue size 128, less than required. 00:11:05.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:05.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:05.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:05.155 Initialization complete. Launching workers.
00:11:05.155 ========================================================
00:11:05.155                                                           Latency(us)
00:11:05.155 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:05.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2125.99       1.04   36804.85    2125.98 1052709.75
00:11:05.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18029.91       8.80    7099.49    1433.03  402288.41
00:11:05.155 ========================================================
00:11:05.155 Total                                                                    :   20155.90       9.84   10232.74    1433.03 1052709.75
00:11:05.416 true
00:11:05.416 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1776822
00:11:05.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1776822) - No such process
00:11:05.416 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1776822
00:11:05.416 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:05.416 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:05.678 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:05.678 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:05.678 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:05.678 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:05.678 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:05.940 null0 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:05.940 null1 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:05.940 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:06.201 null2 00:11:06.201 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:06.201 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:06.201 08:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:06.463 null3 00:11:06.463 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:06.463 08:08:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:06.463 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:06.463 null4 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:06.722 null5 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:06.722 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:06.982 null6 00:11:06.982 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:06.982 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:06.982 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:07.244 null7 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
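From here the test switches to eight parallel add_remove workers, one per null bdev, launched with `&` and collected via `pids+=($!)` and a final `wait` (markers @62-@66). Only the add step (@17) is visible in this excerpt, so the matching remove inside the loop is an assumption; `rpc` again stubs out `scripts/rpc.py`:

```shell
rpc() { :; }   # hypothetical stub for scripts/rpc.py against a live target

add_remove() {                       # shape inferred from markers @14-@17
    local nsid=$1 bdev=$2
    local i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # assumed remove step
    done
}

pids=()
for ((t = 0; t < 8; t++)); do        # @62: one worker per null bdev
    add_remove "$((t + 1))" "null$t" &
    pids+=($!)                       # @64: remember each worker's PID
done
wait "${pids[@]}"                    # @66: block until all workers finish
echo "workers=${#pids[@]} done"
```

Running the adds and removes concurrently against a single subsystem is what exercises the namespace hotplug paths under contention.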
00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.244 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1783190 1783191 1783193 1783196 1783199 1783201 1783204 1783205 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:07.245 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.504 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:07.504 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:07.504 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.504 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:07.764 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.764 08:08:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.025 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:08.026 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.286 08:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:08.546 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.546 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:08.807 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:08.808 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:08.808 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.068 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.068 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:09.069 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.329 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:09.329 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.329 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.329 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.589 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:09.850 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:10.111 08:08:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.111 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:10.372 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.372 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:10.634 08:08:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:10.634 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:10.918 08:08:15 
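The long run of records above is the hotplug stress loop from `target/ns_hotplug_stress.sh` lines 16–18: up to ten passes that attach eight null bdevs as namespaces to `nqn.2016-06.io.spdk:cnode1` and then detach them via `scripts/rpc.py`. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the script; the `rpc` stub (standing in for `rpc.py` against a live SPDK target) and the shuffled NSID order (suggested by the non-sequential add/remove ordering in the log) are assumptions:

```shell
# Sketch of the loop driving the trace (ns_hotplug_stress.sh@16-18).
# rpc() is a hypothetical stub for scripts/rpc.py; a real run talks to
# a running SPDK nvmf target instead.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do                  # @16: ten stress passes
    for n in $(shuf -e {1..8}); do      # shuffled NSIDs, as the log suggests
        # @17: namespace ID n is backed by null bdev null(n-1)
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in $(shuf -e {1..8}); do
        # @18: detach every namespace again before the next pass
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))                           # @16
done
```

Each pass emits eight add and eight remove records, which matches the 16-record bursts between timestamp groups in the log.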
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:10.918 rmmod nvme_tcp 00:11:10.918 rmmod nvme_fabrics 00:11:10.918 rmmod nvme_keyring 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1776320 ']' 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1776320 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1776320 ']' 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1776320 00:11:10.918 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.178 08:08:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1776320 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1776320' 00:11:11.178 killing process with pid 1776320 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1776320 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1776320 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:11.178 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:13.723 08:08:17 
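The `killprocess 1776320` records above trace the teardown helper from `common/autotest_common.sh` lines 954–978. A hedged sketch of that pattern, reconstructed from the xtrace (not copied from the script): it verifies the pid is alive, checks the command name so it never signals a bare `sudo` (which would orphan the real worker, here `reactor_1`), then kills and reaps the process.

```shell
# Sketch of the killprocess pattern seen in the trace
# (autotest_common.sh@954-978); reconstruction, not the original helper.
killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1                            # @954: need a pid
    kill -0 "$pid" 2>/dev/null || return 1               # @958: still running?
    if [ "$(uname)" = Linux ]; then                      # @959
        process_name=$(ps --no-headers -o comm= "$pid")  # @960
    fi
    [ "$process_name" = sudo ] && return 1               # @964: never kill sudo
    echo "killing process with pid $pid"                 # @972
    kill "$pid"                                          # @973
    wait "$pid" 2>/dev/null                              # @978: reap our child
    return 0
}

# Demo against a throwaway child process.
sleep 30 & demo_pid=$!
killprocess "$demo_pid"
```

In the log the target survives the SIGTERM path only as long as it takes `wait` to reap it, after which the module unload and network teardown below proceed.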
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # 
ip addr flush dev cvl_0_1 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:11:13.723 00:11:13.723 real 0m49.598s 00:11:13.723 user 3m10.095s 00:11:13.723 sys 0m16.600s 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.723 ************************************ 00:11:13.723 END TEST nvmf_ns_hotplug_stress 00:11:13.723 ************************************ 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.723 08:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.723 ************************************ 00:11:13.723 START TEST nvmf_delete_subsystem 00:11:13.723 ************************************ 00:11:13.723 08:08:17 
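The interface teardown traced just before the `END TEST` banner comes from `nvmf/setup.sh`: every device in `dev_map` (here `cvl_0_0` and `cvl_0_1`) has its addresses flushed via an `eval`'d `ip addr flush`, then `iptr` reloads iptables with the test's rules filtered out. A hedged sketch of that `flush_ip` pattern (reconstructed from the trace at `setup.sh@211-214` and `common.sh@548`; the `RUN` dry-run knob is an addition for safe demonstration):

```shell
# Sketch of the per-device flush seen in the trace. RUN=echo makes this a
# dry run that prints the commands; unset RUN to execute (requires root).
RUN="echo"

flush_ip() {
    local dev=$1 in_ns=${2:-}                    # @211: device, optional netns
    # @212-214: when a namespace is given, prefix with 'ip netns exec <ns>'
    [ -n "$in_ns" ] && in_ns="ip netns exec $in_ns "
    eval "$RUN ${in_ns}ip addr flush dev $dev"
}

dev_map=(cvl_0_0 cvl_0_1)
for dev in "${dev_map[@]}"; do                   # @260
    flush_ip "$dev"                              # @269
done

# @548 (iptr): drop the test's firewall rules on restore:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
```

The `eval` indirection mirrors the trace, where `setup.sh@214` first evals the quoted command string and then the expanded `ip addr flush dev cvl_0_0` appears as its own record.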
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:13.723 * Looking for test storage... 00:11:13.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.723 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:13.723 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.724 08:08:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.724 08:08:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.724 --rc genhtml_branch_coverage=1 00:11:13.724 --rc genhtml_function_coverage=1 00:11:13.724 --rc genhtml_legend=1 00:11:13.724 --rc geninfo_all_blocks=1 00:11:13.724 --rc geninfo_unexecuted_blocks=1 00:11:13.724 00:11:13.724 ' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.724 --rc genhtml_branch_coverage=1 00:11:13.724 --rc genhtml_function_coverage=1 00:11:13.724 --rc genhtml_legend=1 00:11:13.724 --rc geninfo_all_blocks=1 00:11:13.724 --rc geninfo_unexecuted_blocks=1 00:11:13.724 00:11:13.724 ' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.724 --rc genhtml_branch_coverage=1 00:11:13.724 --rc genhtml_function_coverage=1 00:11:13.724 --rc genhtml_legend=1 00:11:13.724 --rc geninfo_all_blocks=1 00:11:13.724 --rc geninfo_unexecuted_blocks=1 00:11:13.724 00:11:13.724 ' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.724 --rc genhtml_branch_coverage=1 00:11:13.724 --rc genhtml_function_coverage=1 00:11:13.724 --rc genhtml_legend=1 00:11:13.724 --rc geninfo_all_blocks=1 00:11:13.724 --rc geninfo_unexecuted_blocks=1 00:11:13.724 00:11:13.724 ' 
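The trace above steps through the `lt 1.15 2` / `cmp_versions` check in `scripts/common.sh`: both versions are split on `.` and `-` into arrays and compared component-wise. A minimal standalone sketch of that comparison (the function name `version_lt` is illustrative, not from the scripts):

```shell
# Hedged sketch of the dotted-version comparison the cmp_versions trace
# above performs: split on '.'/'-', then compare numerically field by field,
# treating missing fields as 0.
version_lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly less
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # strictly greater
  done
  return 1  # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

In the log this comparison gates which `lcov` option set gets exported (`LCOV_OPTS`/`LCOV`), since option names changed between lcov 1.x and 2.x.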
00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:13.724 08:08:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:13.724 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:13.724 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:11:13.725 08:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:21.977 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:21.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:21.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 
00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:21.978 Found net devices under 0000:31:00.0: cvl_0_0 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:21.978 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:21.978 Found net devices under 0000:31:00.1: cvl_0_1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:21.978 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:21.978 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # 
set_ip cvl_0_0 167772161 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:21.978 10.0.0.1 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.978 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:21.978 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:21.979 10.0.0.2 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:21.979 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 
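An aside on the `val_to_ip` steps traced above (`setup.sh@11`–`@13`): the script carries IPs as unsigned 32-bit integers and converts them to dotted-quad form with `printf '%u.%u.%u.%u\n'`, so 167772161 becomes 10.0.0.1 and 167772162 becomes 10.0.0.2. A minimal sketch of the same conversion (illustrative Python, not the script's shell implementation):

```python
# Split an unsigned 32-bit value into four octets, most significant first,
# mirroring what setup.sh's val_to_ip does with printf '%u.%u.%u.%u\n'.
def val_to_ip(val: int) -> str:
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(val_to_ip(167772161))  # 10.0.0.1  (0x0A000001)
print(val_to_ip(167772162))  # 10.0.0.2  (0x0A000002)
```

The `ip_pool += 2` step in the trace then advances the pool by two addresses per initiator/target pair.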
00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:21.979 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 
00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:22.239 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:22.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.609 ms 00:11:22.239 00:11:22.239 --- 10.0.0.1 ping statistics --- 00:11:22.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.240 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:22.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
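The trace above records each interface's assigned IP in the kernel's `ifalias` attribute (`echo <ip> | tee /sys/class/net/<dev>/ifalias`) and later reads it back with `cat` to resolve addresses for the ping checks. A sketch of that store/read round-trip, using a temporary file as a stand-in for the sysfs path (writing the real `ifalias` file needs root and an actual device):

```python
from pathlib import Path
import tempfile

# Write the IP the way the trace does (echo ... | tee .../ifalias),
# then read it back (cat .../ifalias), stripping the trailing newline.
def ifalias_roundtrip(ip: str, ifalias: Path) -> str:
    ifalias.write_text(ip + "\n")
    return ifalias.read_text().strip()

with tempfile.TemporaryDirectory() as d:
    stored = ifalias_roundtrip("10.0.0.2", Path(d) / "ifalias")
print(stored)  # 10.0.0.2
```

Stashing the address in `ifalias` lets later helpers (`get_ip_address`, `get_tcp_target_ip_address`) recover it without re-parsing `ip addr` output.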
00:11:22.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:11:22.240 00:11:22.240 --- 10.0.0.2 ping statistics --- 00:11:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.240 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:22.240 08:08:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:22.240 
08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:22.240 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1789075 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- 
# waitforlisten 1789075 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1789075 ']' 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.241 08:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.241 [2024-11-20 08:08:26.933030] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:11:22.241 [2024-11-20 08:08:26.933079] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.500 [2024-11-20 08:08:27.021467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:22.500 [2024-11-20 08:08:27.056823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.500 [2024-11-20 08:08:27.056859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
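Here `waitforlisten` blocks until the freshly launched `nvmf_tgt` opens its RPC socket at `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A generic sketch of that kind of wait loop (the path, timeout, and polling interval below are illustrative, not SPDK's actual implementation):

```python
import socket
import time

# Poll until a UNIX-domain stream socket at `path` accepts a connection,
# or give up after `timeout` seconds.
def wait_for_unix_socket(path: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```

Once the connect succeeds, the test proceeds to issue `rpc_cmd` calls (`nvmf_create_transport`, `nvmf_create_subsystem`, ...) over that socket.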
00:11:22.500 [2024-11-20 08:08:27.056874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.500 [2024-11-20 08:08:27.056881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.500 [2024-11-20 08:08:27.056886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.500 [2024-11-20 08:08:27.060881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.500 [2024-11-20 08:08:27.060907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.500 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.501 [2024-11-20 08:08:27.183592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.501 [2024-11-20 08:08:27.199785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.501 NULL1 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:22.501 08:08:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.501 Delay0 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.501 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.765 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.765 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1789103 00:11:22.765 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:22.765 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.765 [2024-11-20 08:08:27.284595] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
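The `spdk_nvme_perf` invocation above addresses the target with a whitespace-separated `key:value` transport-ID string (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420`). A sketch of parsing such a string into fields (an illustrative parser, not SPDK's own):

```python
# Split the transport ID on whitespace, then split each field on the
# first ':' so values like IPv6 addresses would keep their colons.
def parse_trid(trid: str) -> dict:
    return dict(field.split(":", 1) for field in trid.split())

trid = parse_trid("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420")
print(trid["traddr"], trid["trsvcid"])  # 10.0.0.2 4420
```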
00:11:24.680 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.680 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.681 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.943 Write completed with error (sct=0, sc=8) 00:11:24.943 starting I/O failed: -6 00:11:24.943 Write completed with error (sct=0, sc=8) 00:11:24.943 Read completed with error (sct=0, sc=8) 00:11:24.943 Write completed with error (sct=0, sc=8) 00:11:24.943 Read completed with error (sct=0, sc=8) 00:11:24.943 starting I/O failed: -6 00:11:24.943 Write completed with error (sct=0, sc=8) 00:11:24.943 Read completed with error (sct=0, sc=8) 00:11:24.943 Write completed with error (sct=0, sc=8) 00:11:24.943 Read completed with error (sct=0, sc=8) 00:11:24.943 starting I/O failed: -6 00:11:24.943 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 
00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 starting I/O failed: -6 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 [2024-11-20 08:08:29.410167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bef00 is same with the state(6) to be set 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, 
sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Write completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read completed with error (sct=0, sc=8) 00:11:24.944 Read 
completed with error (sct=0, sc=8)
00:11:24.944 Read completed with error (sct=0, sc=8)
00:11:24.944 Write completed with error (sct=0, sc=8)
00:11:24.944 starting I/O failed: -6
00:11:24.944 [many further Read/Write "completed with error (sct=0, sc=8)" entries and repeated "starting I/O failed: -6" entries omitted]
00:11:24.944 [2024-11-20 08:08:29.413562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb278000c40 is same with the state(6) to be set
00:11:24.944 [further Read/Write "completed with error (sct=0, sc=8)" entries omitted]
00:11:25.883 [2024-11-20 08:08:30.384083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c05e0 is same with the state(6) to be set
00:11:25.883 [further Read/Write "completed with error (sct=0, sc=8)" entries omitted]
00:11:25.883 [2024-11-20 08:08:30.413492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bf0e0 is same with the state(6) to be set
00:11:25.883 [further Read/Write "completed with error (sct=0, sc=8)" entries omitted]
00:11:25.883 [2024-11-20 08:08:30.414008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bf4a0 is same with the state(6) to be set
00:11:25.883 [further Read/Write "completed with error (sct=0, sc=8)" entries omitted]
00:11:25.883 [2024-11-20 08:08:30.415462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb27800d7e0 is same with the state(6) to be set
00:11:25.883 [further Read/Write "completed with error (sct=0, sc=8)" entries omitted]
00:11:25.883 [2024-11-20 08:08:30.415897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb27800d020 is same with the state(6) to be set
00:11:25.883 Initializing NVMe Controllers
00:11:25.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:25.883 Controller IO queue size 128, less than required.
00:11:25.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:25.883 Initialization complete. Launching workers.
00:11:25.883 ========================================================
00:11:25.883                                                                             Latency(us)
00:11:25.883 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:11:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.71   0.09  883798.58     245.56 1007839.13
00:11:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.28   0.08  920254.68     307.68 1010227.98
00:11:25.883 ========================================================
00:11:25.883 Total                                                                   : 334.00   0.16  901184.50     245.56 1010227.98
00:11:25.883
00:11:25.883 [2024-11-20 08:08:30.416484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c05e0 (9): Bad file descriptor
00:11:25.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:25.883 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.883 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:25.883 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1789103
00:11:25.883 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1789103
00:11:26.454
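[Editor's aside: the xtrace above shows the test polling a perf process with `kill -0` inside a bounded `sleep 0.5` loop. A minimal sketch of that polling pattern follows; the function name `wait_for_exit` and the 30-iteration bound are illustrative, not taken verbatim from the SPDK scripts.]

```shell
# Poll a PID until it exits, giving up after ~15s (30 * 0.5s).
# kill -0 sends no signal; it only tests whether the PID still exists.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    (( delay++ > 30 )) && return 1   # timed out waiting for exit
    sleep 0.5
  done
  return 0                           # process is gone
}
```

Unlike `wait`, this works for processes the polling shell did not spawn itself, which is why the trace can keep probing the perf PID after the parent handed it off.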
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1789103) - No such process
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1789103
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1789103
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1789103
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:26.454 [2024-11-20 08:08:30.948609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1789909
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:26.454 08:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:26.454 [2024-11-20 08:08:31.026096] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:27.024 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:27.024 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:27.024 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:27.285 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:27.285 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:27.285 08:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:27.855 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:27.855 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:27.855 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:28.424 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:28.424 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:28.424 08:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:28.802 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:28.802 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:28.802 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:29.385 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:29.385 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:29.385 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:29.645 Initializing NVMe Controllers
00:11:29.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:29.645 Controller IO queue size 128, less than required.
00:11:29.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:29.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:29.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:29.645 Initialization complete. Launching workers.
00:11:29.645 ========================================================
00:11:29.645                                                                             Latency(us)
00:11:29.645 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:11:29.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00   0.06 1002056.51 1000163.74 1007274.51
00:11:29.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00   0.06 1002935.44 1000274.69 1009340.70
00:11:29.645 ========================================================
00:11:29.645 Total                                                                   : 256.00   0.12 1002495.98 1000163.74 1009340.70
00:11:29.645
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1789909
00:11:29.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1789909) - No such process
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1789909
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:11:29.906 rmmod nvme_tcp
00:11:29.906 rmmod nvme_fabrics
00:11:29.906 rmmod nvme_keyring
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1789075 ']'
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1789075
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1789075 ']'
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1789075
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:29.906 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1789075
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1789075'
00:11:30.166 killing process with pid 1789075
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1789075
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1789075
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:11:30.166 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=()
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore
00:11:32.709
00:11:32.709 real 0m18.864s
00:11:32.709 user 0m29.623s
00:11:32.709 sys 0m7.587s
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.709 ************************************
00:11:32.709 END TEST nvmf_delete_subsystem
00:11:32.709 ************************************
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:32.709 ************************************
00:11:32.709 START TEST nvmf_host_management
00:11:32.709 ************************************
00:11:32.709 08:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:11:32.709 * Looking for test storage...
00:11:32.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:32.709 08:08:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.709 08:08:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.709 --rc genhtml_branch_coverage=1 00:11:32.709 --rc genhtml_function_coverage=1 00:11:32.709 --rc genhtml_legend=1 00:11:32.709 --rc geninfo_all_blocks=1 00:11:32.709 --rc geninfo_unexecuted_blocks=1 00:11:32.709 00:11:32.709 ' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.709 --rc genhtml_branch_coverage=1 00:11:32.709 --rc genhtml_function_coverage=1 00:11:32.709 --rc genhtml_legend=1 00:11:32.709 --rc geninfo_all_blocks=1 00:11:32.709 --rc geninfo_unexecuted_blocks=1 00:11:32.709 00:11:32.709 ' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.709 --rc genhtml_branch_coverage=1 00:11:32.709 --rc genhtml_function_coverage=1 00:11:32.709 --rc genhtml_legend=1 00:11:32.709 --rc geninfo_all_blocks=1 00:11:32.709 --rc geninfo_unexecuted_blocks=1 00:11:32.709 00:11:32.709 ' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.709 --rc genhtml_branch_coverage=1 00:11:32.709 --rc genhtml_function_coverage=1 00:11:32.709 --rc genhtml_legend=1 00:11:32.709 --rc geninfo_all_blocks=1 00:11:32.709 --rc geninfo_unexecuted_blocks=1 00:11:32.709 00:11:32.709 ' 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
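The trace above walks through `cmp_versions`/`lt` from `scripts/common.sh` checking `lcov` 1.15 against 2: each version string is split on `.`, `-`, and `:` into an array, then compared component by component. A condensed sketch of that comparison logic (the function name `ver_lt` and the exact loop shape are assumptions; the real helper lives in `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff version A < version B.
# Mirrors the traced logic: split on .-: then compare numeric components,
# treating missing components as 0.
ver_lt() {
    local IFS=.-:          # same separator set as the traced IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1               # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # the comparison made in the trace
```

Because 1 < 2 in the first component, the loop returns success immediately, which is why the trace takes the `return 0` branch at `scripts/common.sh@368`.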
00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.709 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.710 08:08:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:32.710 08:08:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:32.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:32.710 
08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:11:32.710 08:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:40.853 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.853 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:11:40.853 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:40.853 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:40.853 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 
-- # net_devs=() 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:40.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.854 08:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:40.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:40.854 08:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:40.854 Found net devices under 0000:31:00.0: cvl_0_0 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:40.854 Found net devices under 0000:31:00.1: cvl_0_1 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
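The discovery loop above maps each supported PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix, which is how `0000:31:00.0` resolves to `cvl_0_0`. A minimal sketch of that glob-and-strip step, run against a throwaway directory standing in for `/sys` (the sysfs layout below is a mock; only the two parameter expansions are taken from the trace):

```shell
#!/usr/bin/env bash
# Mock the sysfs subtree that gather_supported_nvmf_pci_devs globs.
sys=$(mktemp -d)
pci=0000:31:00.0
mkdir -p "$sys/bus/pci/devices/$pci/net/cvl_0_0"

# Same two steps as the trace: glob the net/ children, then keep only
# the basename of each entry (the interface name).
pci_net_devs=("$sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sys"
```

On the real host the glob hits sysfs directly, so each echo in the log (`Found net devices under 0000:31:00.0: cvl_0_0`) reflects an interface the `ice` driver registered for that PCI function.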
00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:40.854 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:40.855 08:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:40.855 10.0.0.1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:40.855 10.0.0.2 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:40.855 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:41.118 
08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:41.118 
08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:41.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.627 ms 00:11:41.118 00:11:41.118 --- 10.0.0.1 ping statistics --- 00:11:41.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.118 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:41.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:11:41.118 00:11:41.118 --- 10.0.0.2 ping statistics --- 00:11:41.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.118 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:41.118 
08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.118 
08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:11:41.118 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:41.119 08:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1795510 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1795510 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1795510 ']' 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.119 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:41.379 [2024-11-20 08:08:45.855659] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:11:41.379 [2024-11-20 08:08:45.855731] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.379 [2024-11-20 08:08:45.966137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.379 [2024-11-20 08:08:46.019441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.379 [2024-11-20 08:08:46.019496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.379 [2024-11-20 08:08:46.019505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.379 [2024-11-20 08:08:46.019512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.379 [2024-11-20 08:08:46.019519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.379 [2024-11-20 08:08:46.021800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.379 [2024-11-20 08:08:46.021966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.379 [2024-11-20 08:08:46.022292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.379 [2024-11-20 08:08:46.022296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.950 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.950 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:41.950 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:41.950 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.950 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 [2024-11-20 08:08:46.710155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:42.211 08:08:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 Malloc0 00:11:42.211 [2024-11-20 08:08:46.786198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1795876 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1795876 /var/tmp/bdevperf.sock 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1795876 ']' 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:42.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:42.211 { 00:11:42.211 "params": { 00:11:42.211 "name": "Nvme$subsystem", 00:11:42.211 "trtype": "$TEST_TRANSPORT", 00:11:42.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.211 "adrfam": "ipv4", 00:11:42.211 "trsvcid": "$NVMF_PORT", 00:11:42.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.211 "hdgst": ${hdgst:-false}, 
00:11:42.211 "ddgst": ${ddgst:-false} 00:11:42.211 }, 00:11:42.211 "method": "bdev_nvme_attach_controller" 00:11:42.211 } 00:11:42.211 EOF 00:11:42.211 )") 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:42.211 08:08:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:42.211 "params": { 00:11:42.211 "name": "Nvme0", 00:11:42.211 "trtype": "tcp", 00:11:42.211 "traddr": "10.0.0.2", 00:11:42.211 "adrfam": "ipv4", 00:11:42.211 "trsvcid": "4420", 00:11:42.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:42.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:42.211 "hdgst": false, 00:11:42.211 "ddgst": false 00:11:42.211 }, 00:11:42.211 "method": "bdev_nvme_attach_controller" 00:11:42.211 }' 00:11:42.211 [2024-11-20 08:08:46.900380] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:11:42.211 [2024-11-20 08:08:46.900435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795876 ] 00:11:42.472 [2024-11-20 08:08:46.978344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.472 [2024-11-20 08:08:47.014652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.732 Running I/O for 10 seconds... 
00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.993 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=708 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 708 -ge 100 ']' 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.254 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.254 [2024-11-20 08:08:47.757332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce530 is same with the state(6) to be set 00:11:43.254 [2024-11-20 08:08:47.757381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce530 is same with the state(6) to be set 00:11:43.254 [2024-11-20 08:08:47.760138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:11:43.254 [2024-11-20 08:08:47.760286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 
08:08:47.760375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.254 [2024-11-20 08:08:47.760409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.254 [2024-11-20 08:08:47.760418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 
[2024-11-20 08:08:47.760755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.760984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.760992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.761001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.761008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.761018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.761025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.761034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.761042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.761051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.255 [2024-11-20 08:08:47.761058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.255 [2024-11-20 08:08:47.761067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 
[2024-11-20 08:08:47.761143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:43.256 [2024-11-20 08:08:47.761259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.256 [2024-11-20 08:08:47.762500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.256 task offset: 103168 on job bdev=Nvme0n1 fails 00:11:43.256 00:11:43.256 Latency(us) 00:11:43.256 [2024-11-20T07:08:47.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.256 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:43.256 Job: Nvme0n1 ended in about 0.44 seconds with error 00:11:43.256 Verification LBA range: start 0x0 length 0x400 00:11:43.256 Nvme0n1 : 0.44 1799.08 112.44 145.93 0.00 31893.94 1542.83 30801.92 00:11:43.256 [2024-11-20T07:08:47.985Z] 
=================================================================================================================== 00:11:43.256 [2024-11-20T07:08:47.985Z] Total : 1799.08 112.44 145.93 0.00 31893.94 1542.83 30801.92 00:11:43.256 [2024-11-20 08:08:47.764570] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.256 [2024-11-20 08:08:47.764594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88b00 (9): Bad file descriptor 00:11:43.256 [2024-11-20 08:08:47.769118] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:43.256 [2024-11-20 08:08:47.769189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:43.256 [2024-11-20 08:08:47.769212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.256 [2024-11-20 08:08:47.769224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:43.256 [2024-11-20 08:08:47.769232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:43.256 [2024-11-20 08:08:47.769239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:43.256 [2024-11-20 08:08:47.769246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf88b00 00:11:43.256 [2024-11-20 08:08:47.769265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88b00 (9): Bad file descriptor 00:11:43.256 [2024-11-20 08:08:47.769279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:11:43.256 [2024-11-20 
08:08:47.769286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:11:43.256 [2024-11-20 08:08:47.769295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:11:43.256 [2024-11-20 08:08:47.769304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.256 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1795876 00:11:44.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1795876) - No such process 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:44.197 
08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:44.197 { 00:11:44.197 "params": { 00:11:44.197 "name": "Nvme$subsystem", 00:11:44.197 "trtype": "$TEST_TRANSPORT", 00:11:44.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:44.197 "adrfam": "ipv4", 00:11:44.197 "trsvcid": "$NVMF_PORT", 00:11:44.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:44.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:44.197 "hdgst": ${hdgst:-false}, 00:11:44.197 "ddgst": ${ddgst:-false} 00:11:44.197 }, 00:11:44.197 "method": "bdev_nvme_attach_controller" 00:11:44.197 } 00:11:44.197 EOF 00:11:44.197 )") 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:44.197 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:44.197 "params": { 00:11:44.197 "name": "Nvme0", 00:11:44.197 "trtype": "tcp", 00:11:44.197 "traddr": "10.0.0.2", 00:11:44.197 "adrfam": "ipv4", 00:11:44.197 "trsvcid": "4420", 00:11:44.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:44.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:44.197 "hdgst": false, 00:11:44.197 "ddgst": false 00:11:44.197 }, 00:11:44.197 "method": "bdev_nvme_attach_controller" 00:11:44.197 }' 00:11:44.197 [2024-11-20 08:08:48.836327] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:11:44.197 [2024-11-20 08:08:48.836381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796236 ] 00:11:44.197 [2024-11-20 08:08:48.914096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.457 [2024-11-20 08:08:48.950551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.717 Running I/O for 1 seconds... 00:11:45.657 1599.00 IOPS, 99.94 MiB/s 00:11:45.658 Latency(us) 00:11:45.658 [2024-11-20T07:08:50.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.658 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:45.658 Verification LBA range: start 0x0 length 0x400 00:11:45.658 Nvme0n1 : 1.04 1605.21 100.33 0.00 0.00 39189.35 6062.08 32768.00 00:11:45.658 [2024-11-20T07:08:50.387Z] =================================================================================================================== 00:11:45.658 [2024-11-20T07:08:50.387Z] Total : 1605.21 100.33 0.00 0.00 39189.35 6062.08 32768.00 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:45.917 08:08:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:45.917 rmmod nvme_tcp 00:11:45.917 rmmod nvme_fabrics 00:11:45.917 rmmod nvme_keyring 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1795510 ']' 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1795510 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1795510 ']' 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1795510 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.917 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795510 00:11:45.918 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:45.918 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:45.918 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795510' 00:11:45.918 killing process with pid 1795510 00:11:45.918 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1795510 00:11:45.918 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1795510 00:11:46.178 [2024-11-20 08:08:50.701617] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:46.178 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:11:48.089 08:08:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # 
reset_setup_interfaces 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:11:48.089 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:48.350 00:11:48.350 real 0m15.886s 00:11:48.350 user 0m23.956s 00:11:48.350 sys 0m7.488s 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 ************************************ 00:11:48.350 END TEST nvmf_host_management 00:11:48.350 ************************************ 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 ************************************ 00:11:48.350 START TEST nvmf_lvol 00:11:48.350 ************************************ 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:48.350 * Looking for test storage... 00:11:48.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.350 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.350 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 
-- # : 1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.611 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.612 --rc genhtml_branch_coverage=1 00:11:48.612 --rc 
genhtml_function_coverage=1 00:11:48.612 --rc genhtml_legend=1 00:11:48.612 --rc geninfo_all_blocks=1 00:11:48.612 --rc geninfo_unexecuted_blocks=1 00:11:48.612 00:11:48.612 ' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.612 --rc genhtml_branch_coverage=1 00:11:48.612 --rc genhtml_function_coverage=1 00:11:48.612 --rc genhtml_legend=1 00:11:48.612 --rc geninfo_all_blocks=1 00:11:48.612 --rc geninfo_unexecuted_blocks=1 00:11:48.612 00:11:48.612 ' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.612 --rc genhtml_branch_coverage=1 00:11:48.612 --rc genhtml_function_coverage=1 00:11:48.612 --rc genhtml_legend=1 00:11:48.612 --rc geninfo_all_blocks=1 00:11:48.612 --rc geninfo_unexecuted_blocks=1 00:11:48.612 00:11:48.612 ' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.612 --rc genhtml_branch_coverage=1 00:11:48.612 --rc genhtml_function_coverage=1 00:11:48.612 --rc genhtml_legend=1 00:11:48.612 --rc geninfo_all_blocks=1 00:11:48.612 --rc geninfo_unexecuted_blocks=1 00:11:48.612 00:11:48.612 ' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
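The `lcov --version` check traced above walks through `scripts/common.sh`'s `cmp_versions` helper: both version strings are split on `.`, `-` and `:` into arrays (`read -ra ver1` / `read -ra ver2` with `IFS=.-:`), then compared component by component, which is why `lt 1.15 2` succeeds (1 < 2 on the first component). A minimal re-sketch of that logic, with simplified names and structure that are my reconstruction rather than SPDK's exact code:

```shell
# Hedged sketch of the version comparison traced in the log
# (scripts/common.sh cmp_versions, simplified): split on '.', '-', ':'
# and compare numerically, component by component. Missing components
# are treated as 0.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then echo false; return 1; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then echo true;  return 0; fi
  done
  echo false; return 1   # equal versions: not strictly less-than
}
lt 1.15 2    # first components differ: 1 < 2
lt 2.0 1.15  # first components differ: 2 > 1
```

In the trace this gate decides whether extra `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are needed for the installed lcov.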
00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:11:48.612 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:48.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.612 
08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:11:48.612 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:56.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:56.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.748 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:56.749 Found net devices under 0000:31:00.0: cvl_0_0 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:56.749 Found net devices under 0000:31:00.1: cvl_0_1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:56.749 08:09:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # 
val_to_ip 167772161 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:56.749 10.0.0.1 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.749 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_1' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:57.012 10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:57.012 
08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:57.012 08:09:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:57.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.648 ms 00:11:57.012 00:11:57.012 --- 10.0.0.1 ping statistics --- 00:11:57.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.012 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:57.012 08:09:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:57.012 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:57.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:57.012 00:11:57.013 --- 10.0.0.2 ping statistics --- 00:11:57.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.013 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:57.013 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:11:57.274 08:09:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1801419 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1801419 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1801419 ']' 00:11:57.274 
08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.274 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.275 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.275 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:57.275 [2024-11-20 08:09:01.872474] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:11:57.275 [2024-11-20 08:09:01.872552] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.275 [2024-11-20 08:09:01.964143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.535 [2024-11-20 08:09:02.005254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.535 [2024-11-20 08:09:02.005293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.535 [2024-11-20 08:09:02.005302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.535 [2024-11-20 08:09:02.005308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.535 [2024-11-20 08:09:02.005314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.535 [2024-11-20 08:09:02.006832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.535 [2024-11-20 08:09:02.006975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.535 [2024-11-20 08:09:02.007151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.105 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:58.365 [2024-11-20 08:09:02.865788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.365 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.626 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:58.626 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.626 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:58.626 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:58.886 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:59.146 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fda0a91e-b6a1-4f91-922c-661510cc2fc6 00:11:59.146 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fda0a91e-b6a1-4f91-922c-661510cc2fc6 lvol 20 00:11:59.146 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3f4b0fd4-50c7-4531-953b-577f9895f36a 00:11:59.146 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:59.406 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f4b0fd4-50c7-4531-953b-577f9895f36a 00:11:59.665 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:59.665 [2024-11-20 08:09:04.363156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.925 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:59.925 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1802120 00:11:59.925 08:09:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:59.925 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:00.867 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3f4b0fd4-50c7-4531-953b-577f9895f36a MY_SNAPSHOT 00:12:01.127 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7707e0a2-3ea5-4877-ba66-c47da40cb9bb 00:12:01.127 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3f4b0fd4-50c7-4531-953b-577f9895f36a 30 00:12:01.388 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7707e0a2-3ea5-4877-ba66-c47da40cb9bb MY_CLONE 00:12:01.649 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=183dd10e-4b22-4831-9f69-c67d4b45243e 00:12:01.649 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 183dd10e-4b22-4831-9f69-c67d4b45243e 00:12:02.220 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1802120 00:12:10.355 Initializing NVMe Controllers 00:12:10.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:10.355 Controller IO queue size 128, less than required. 00:12:10.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:10.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:10.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:10.355 Initialization complete. Launching workers. 00:12:10.355 ======================================================== 00:12:10.355 Latency(us) 00:12:10.355 Device Information : IOPS MiB/s Average min max 00:12:10.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17869.00 69.80 7165.15 945.35 59993.15 00:12:10.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12263.30 47.90 10439.21 2073.70 47882.58 00:12:10.355 ======================================================== 00:12:10.355 Total : 30132.30 117.70 8497.64 945.35 59993.15 00:12:10.355 00:12:10.355 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:10.355 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3f4b0fd4-50c7-4531-953b-577f9895f36a 00:12:10.617 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fda0a91e-b6a1-4f91-922c-661510cc2fc6 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:10.877 rmmod nvme_tcp 00:12:10.877 rmmod nvme_fabrics 00:12:10.877 rmmod nvme_keyring 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1801419 ']' 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1801419 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1801419 ']' 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1801419 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801419 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.877 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.878 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801419' 00:12:10.878 killing process with pid 1801419 00:12:10.878 08:09:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1801419 00:12:10.878 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1801419 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:11.139 08:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:13.053 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 
-- # eval ' ip addr flush dev cvl_0_0' 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:12:13.054 00:12:13.054 real 0m24.846s 00:12:13.054 user 1m4.307s 00:12:13.054 sys 0m9.357s 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # 
set +x 00:12:13.054 ************************************ 00:12:13.054 END TEST nvmf_lvol 00:12:13.054 ************************************ 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.054 08:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.316 ************************************ 00:12:13.316 START TEST nvmf_lvs_grow 00:12:13.316 ************************************ 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:13.316 * Looking for test storage... 
00:12:13.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.316 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.316 --rc genhtml_branch_coverage=1 00:12:13.316 --rc 
genhtml_function_coverage=1 00:12:13.316 --rc genhtml_legend=1 00:12:13.316 --rc geninfo_all_blocks=1 00:12:13.316 --rc geninfo_unexecuted_blocks=1 00:12:13.316 00:12:13.316 ' 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.316 --rc genhtml_branch_coverage=1 00:12:13.316 --rc genhtml_function_coverage=1 00:12:13.316 --rc genhtml_legend=1 00:12:13.316 --rc geninfo_all_blocks=1 00:12:13.316 --rc geninfo_unexecuted_blocks=1 00:12:13.316 00:12:13.316 ' 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.316 --rc genhtml_branch_coverage=1 00:12:13.316 --rc genhtml_function_coverage=1 00:12:13.316 --rc genhtml_legend=1 00:12:13.316 --rc geninfo_all_blocks=1 00:12:13.316 --rc geninfo_unexecuted_blocks=1 00:12:13.316 00:12:13.316 ' 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.316 --rc genhtml_branch_coverage=1 00:12:13.316 --rc genhtml_function_coverage=1 00:12:13.316 --rc genhtml_legend=1 00:12:13.316 --rc geninfo_all_blocks=1 00:12:13.316 --rc geninfo_unexecuted_blocks=1 00:12:13.316 00:12:13.316 ' 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.316 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.317 08:09:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:13.317 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:13.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:13.578 08:09:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:12:13.578 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:12:21.718 
08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.718 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:21.718 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:21.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:21.718 Found net devices under 0000:31:00.0: cvl_0_0 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.718 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:21.718 Found net devices under 0000:31:00.1: cvl_0_1 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:12:21.718 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:21.718 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:21.719 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 
00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:21.719 10.0.0.1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@11 -- # local val=167772162 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:21.719 10.0.0.2 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:21.719 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.981 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:21.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.661 ms 00:12:21.981 00:12:21.981 --- 10.0.0.1 ping statistics --- 00:12:21.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.981 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:21.981 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:21.981 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:21.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:12:21.982 00:12:21.982 --- 10.0.0.2 ping statistics --- 00:12:21.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.982 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 
00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:21.982 
08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.982 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1809638 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1809638 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1809638 ']' 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.983 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:22.244 [2024-11-20 08:09:26.726226] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:12:22.244 [2024-11-20 08:09:26.726274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.244 [2024-11-20 08:09:26.811629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.244 [2024-11-20 08:09:26.846197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.244 [2024-11-20 08:09:26.846230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.244 [2024-11-20 08:09:26.846238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.244 [2024-11-20 08:09:26.846244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.244 [2024-11-20 08:09:26.846250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:22.244 [2024-11-20 08:09:26.846823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.815 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.815 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:22.815 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:22.815 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.815 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:23.076 [2024-11-20 08:09:27.723154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:23.076 ************************************ 00:12:23.076 START TEST lvs_grow_clean 00:12:23.076 ************************************ 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.076 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.336 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:23.336 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:23.336 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:23.596 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:23.596 08:09:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:23.596 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e lvol 150 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.856 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:24.116 [2024-11-20 08:09:28.650468] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:24.116 [2024-11-20 08:09:28.650520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:24.116 true 00:12:24.116 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:24.116 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:24.376 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:24.376 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.376 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 00:12:24.642 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:24.642 [2024-11-20 08:09:29.340574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.642 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1810097 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1810097 /var/tmp/bdevperf.sock 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1810097 ']' 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.902 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:24.903 [2024-11-20 08:09:29.565888] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:12:24.903 [2024-11-20 08:09:29.565942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810097 ] 00:12:25.162 [2024-11-20 08:09:29.660800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.162 [2024-11-20 08:09:29.697014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.732 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.732 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:25.732 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:25.992 Nvme0n1 00:12:25.992 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:26.253 [ 00:12:26.253 { 00:12:26.254 "name": "Nvme0n1", 00:12:26.254 "aliases": [ 00:12:26.254 "8ca0cb9c-5edc-4b9b-8277-2294d7f031d9" 00:12:26.254 ], 00:12:26.254 "product_name": "NVMe disk", 00:12:26.254 "block_size": 4096, 00:12:26.254 "num_blocks": 38912, 00:12:26.254 "uuid": "8ca0cb9c-5edc-4b9b-8277-2294d7f031d9", 00:12:26.254 "numa_id": 0, 00:12:26.254 "assigned_rate_limits": { 00:12:26.254 "rw_ios_per_sec": 0, 00:12:26.254 "rw_mbytes_per_sec": 0, 00:12:26.254 "r_mbytes_per_sec": 0, 00:12:26.254 "w_mbytes_per_sec": 0 00:12:26.254 }, 00:12:26.254 "claimed": false, 00:12:26.254 "zoned": false, 00:12:26.254 "supported_io_types": { 00:12:26.254 "read": true, 
00:12:26.254 "write": true, 00:12:26.254 "unmap": true, 00:12:26.254 "flush": true, 00:12:26.254 "reset": true, 00:12:26.254 "nvme_admin": true, 00:12:26.254 "nvme_io": true, 00:12:26.254 "nvme_io_md": false, 00:12:26.254 "write_zeroes": true, 00:12:26.254 "zcopy": false, 00:12:26.254 "get_zone_info": false, 00:12:26.254 "zone_management": false, 00:12:26.254 "zone_append": false, 00:12:26.254 "compare": true, 00:12:26.254 "compare_and_write": true, 00:12:26.254 "abort": true, 00:12:26.254 "seek_hole": false, 00:12:26.254 "seek_data": false, 00:12:26.254 "copy": true, 00:12:26.254 "nvme_iov_md": false 00:12:26.254 }, 00:12:26.254 "memory_domains": [ 00:12:26.254 { 00:12:26.254 "dma_device_id": "system", 00:12:26.254 "dma_device_type": 1 00:12:26.254 } 00:12:26.254 ], 00:12:26.254 "driver_specific": { 00:12:26.254 "nvme": [ 00:12:26.254 { 00:12:26.254 "trid": { 00:12:26.254 "trtype": "TCP", 00:12:26.254 "adrfam": "IPv4", 00:12:26.254 "traddr": "10.0.0.2", 00:12:26.254 "trsvcid": "4420", 00:12:26.254 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:26.254 }, 00:12:26.254 "ctrlr_data": { 00:12:26.254 "cntlid": 1, 00:12:26.254 "vendor_id": "0x8086", 00:12:26.254 "model_number": "SPDK bdev Controller", 00:12:26.254 "serial_number": "SPDK0", 00:12:26.254 "firmware_revision": "25.01", 00:12:26.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:26.254 "oacs": { 00:12:26.254 "security": 0, 00:12:26.254 "format": 0, 00:12:26.254 "firmware": 0, 00:12:26.254 "ns_manage": 0 00:12:26.254 }, 00:12:26.254 "multi_ctrlr": true, 00:12:26.254 "ana_reporting": false 00:12:26.254 }, 00:12:26.254 "vs": { 00:12:26.254 "nvme_version": "1.3" 00:12:26.254 }, 00:12:26.254 "ns_data": { 00:12:26.254 "id": 1, 00:12:26.254 "can_share": true 00:12:26.254 } 00:12:26.254 } 00:12:26.254 ], 00:12:26.254 "mp_policy": "active_passive" 00:12:26.254 } 00:12:26.254 } 00:12:26.254 ] 00:12:26.254 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1810387 00:12:26.254 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:26.254 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:26.254 Running I/O for 10 seconds... 00:12:27.194 Latency(us) 00:12:27.194 [2024-11-20T07:09:31.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.194 Nvme0n1 : 1.00 17355.00 67.79 0.00 0.00 0.00 0.00 0.00 00:12:27.194 [2024-11-20T07:09:31.923Z] =================================================================================================================== 00:12:27.194 [2024-11-20T07:09:31.923Z] Total : 17355.00 67.79 0.00 0.00 0.00 0.00 0.00 00:12:27.194 00:12:28.136 08:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:28.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.397 Nvme0n1 : 2.00 17441.50 68.13 0.00 0.00 0.00 0.00 0.00 00:12:28.397 [2024-11-20T07:09:33.126Z] =================================================================================================================== 00:12:28.397 [2024-11-20T07:09:33.126Z] Total : 17441.50 68.13 0.00 0.00 0.00 0.00 0.00 00:12:28.397 00:12:28.397 true 00:12:28.397 08:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:28.397 08:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:28.397 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:28.397 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:28.397 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1810387 00:12:29.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.338 Nvme0n1 : 3.00 17470.33 68.24 0.00 0.00 0.00 0.00 0.00 00:12:29.338 [2024-11-20T07:09:34.067Z] =================================================================================================================== 00:12:29.338 [2024-11-20T07:09:34.067Z] Total : 17470.33 68.24 0.00 0.00 0.00 0.00 0.00 00:12:29.338 00:12:30.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.280 Nvme0n1 : 4.00 17502.75 68.37 0.00 0.00 0.00 0.00 0.00 00:12:30.280 [2024-11-20T07:09:35.009Z] =================================================================================================================== 00:12:30.280 [2024-11-20T07:09:35.009Z] Total : 17502.75 68.37 0.00 0.00 0.00 0.00 0.00 00:12:30.280 00:12:31.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.218 Nvme0n1 : 5.00 17528.60 68.47 0.00 0.00 0.00 0.00 0.00 00:12:31.218 [2024-11-20T07:09:35.947Z] =================================================================================================================== 00:12:31.218 [2024-11-20T07:09:35.947Z] Total : 17528.60 68.47 0.00 0.00 0.00 0.00 0.00 00:12:31.218 00:12:32.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.159 Nvme0n1 : 6.00 17547.17 68.54 0.00 0.00 0.00 0.00 0.00 00:12:32.159 [2024-11-20T07:09:36.888Z] =================================================================================================================== 00:12:32.159 
[2024-11-20T07:09:36.889Z] Total : 17547.17 68.54 0.00 0.00 0.00 0.00 0.00 00:12:32.160 00:12:33.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.546 Nvme0n1 : 7.00 17566.14 68.62 0.00 0.00 0.00 0.00 0.00 00:12:33.546 [2024-11-20T07:09:38.275Z] =================================================================================================================== 00:12:33.546 [2024-11-20T07:09:38.275Z] Total : 17566.14 68.62 0.00 0.00 0.00 0.00 0.00 00:12:33.546 00:12:34.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.487 Nvme0n1 : 8.00 17580.38 68.67 0.00 0.00 0.00 0.00 0.00 00:12:34.487 [2024-11-20T07:09:39.216Z] =================================================================================================================== 00:12:34.487 [2024-11-20T07:09:39.216Z] Total : 17580.38 68.67 0.00 0.00 0.00 0.00 0.00 00:12:34.487 00:12:35.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.430 Nvme0n1 : 9.00 17592.33 68.72 0.00 0.00 0.00 0.00 0.00 00:12:35.430 [2024-11-20T07:09:40.159Z] =================================================================================================================== 00:12:35.430 [2024-11-20T07:09:40.159Z] Total : 17592.33 68.72 0.00 0.00 0.00 0.00 0.00 00:12:35.430 00:12:36.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.370 Nvme0n1 : 10.00 17600.30 68.75 0.00 0.00 0.00 0.00 0.00 00:12:36.370 [2024-11-20T07:09:41.099Z] =================================================================================================================== 00:12:36.370 [2024-11-20T07:09:41.099Z] Total : 17600.30 68.75 0.00 0.00 0.00 0.00 0.00 00:12:36.370 00:12:36.370 00:12:36.370 Latency(us) 00:12:36.370 [2024-11-20T07:09:41.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:36.370 Nvme0n1 : 10.01 17600.75 68.75 0.00 0.00 7267.31 3099.31 10485.76 00:12:36.370 [2024-11-20T07:09:41.099Z] =================================================================================================================== 00:12:36.370 [2024-11-20T07:09:41.099Z] Total : 17600.75 68.75 0.00 0.00 7267.31 3099.31 10485.76 00:12:36.370 { 00:12:36.370 "results": [ 00:12:36.370 { 00:12:36.370 "job": "Nvme0n1", 00:12:36.370 "core_mask": "0x2", 00:12:36.370 "workload": "randwrite", 00:12:36.370 "status": "finished", 00:12:36.370 "queue_depth": 128, 00:12:36.370 "io_size": 4096, 00:12:36.370 "runtime": 10.007019, 00:12:36.370 "iops": 17600.746036357083, 00:12:36.370 "mibps": 68.75291420451985, 00:12:36.370 "io_failed": 0, 00:12:36.370 "io_timeout": 0, 00:12:36.370 "avg_latency_us": 7267.31447116067, 00:12:36.370 "min_latency_us": 3099.306666666667, 00:12:36.370 "max_latency_us": 10485.76 00:12:36.370 } 00:12:36.370 ], 00:12:36.370 "core_count": 1 00:12:36.370 } 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1810097 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1810097 ']' 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1810097 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810097 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810097' 00:12:36.370 killing process with pid 1810097 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1810097 00:12:36.370 Received shutdown signal, test time was about 10.000000 seconds 00:12:36.370 00:12:36.370 Latency(us) 00:12:36.370 [2024-11-20T07:09:41.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.370 [2024-11-20T07:09:41.099Z] =================================================================================================================== 00:12:36.370 [2024-11-20T07:09:41.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:36.370 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1810097 00:12:36.370 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:36.631 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.891 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:36.891 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:37.152 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:37.152 08:09:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:37.152 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:37.152 [2024-11-20 08:09:41.824127] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.412 08:09:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:37.412 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:37.412 request: 00:12:37.412 { 00:12:37.412 "uuid": "5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e", 00:12:37.412 "method": "bdev_lvol_get_lvstores", 00:12:37.412 "req_id": 1 00:12:37.412 } 00:12:37.412 Got JSON-RPC error response 00:12:37.412 response: 00:12:37.412 { 00:12:37.412 "code": -19, 00:12:37.412 "message": "No such device" 00:12:37.412 } 00:12:37.412 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:37.412 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.412 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.412 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.412 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:37.672 aio_bdev 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.672 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:37.932 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 -t 2000 00:12:37.932 [ 00:12:37.932 { 00:12:37.932 "name": "8ca0cb9c-5edc-4b9b-8277-2294d7f031d9", 00:12:37.932 "aliases": [ 00:12:37.932 "lvs/lvol" 00:12:37.932 ], 00:12:37.932 "product_name": "Logical Volume", 00:12:37.932 "block_size": 4096, 00:12:37.932 "num_blocks": 38912, 00:12:37.932 "uuid": "8ca0cb9c-5edc-4b9b-8277-2294d7f031d9", 00:12:37.932 "assigned_rate_limits": { 00:12:37.932 "rw_ios_per_sec": 0, 00:12:37.932 "rw_mbytes_per_sec": 0, 00:12:37.932 "r_mbytes_per_sec": 0, 00:12:37.932 "w_mbytes_per_sec": 0 00:12:37.932 }, 00:12:37.932 "claimed": false, 00:12:37.932 "zoned": false, 00:12:37.932 "supported_io_types": { 00:12:37.932 "read": true, 00:12:37.932 "write": true, 00:12:37.932 "unmap": true, 00:12:37.932 "flush": false, 00:12:37.932 "reset": true, 00:12:37.932 
"nvme_admin": false, 00:12:37.932 "nvme_io": false, 00:12:37.932 "nvme_io_md": false, 00:12:37.932 "write_zeroes": true, 00:12:37.932 "zcopy": false, 00:12:37.932 "get_zone_info": false, 00:12:37.932 "zone_management": false, 00:12:37.932 "zone_append": false, 00:12:37.932 "compare": false, 00:12:37.932 "compare_and_write": false, 00:12:37.932 "abort": false, 00:12:37.932 "seek_hole": true, 00:12:37.932 "seek_data": true, 00:12:37.932 "copy": false, 00:12:37.932 "nvme_iov_md": false 00:12:37.932 }, 00:12:37.932 "driver_specific": { 00:12:37.932 "lvol": { 00:12:37.932 "lvol_store_uuid": "5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e", 00:12:37.932 "base_bdev": "aio_bdev", 00:12:37.932 "thin_provision": false, 00:12:37.932 "num_allocated_clusters": 38, 00:12:37.932 "snapshot": false, 00:12:37.932 "clone": false, 00:12:37.932 "esnap_clone": false 00:12:37.932 } 00:12:37.932 } 00:12:37.932 } 00:12:37.932 ] 00:12:37.932 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:37.932 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:37.932 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:38.192 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:38.192 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:38.192 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:38.192 08:09:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:38.192 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ca0cb9c-5edc-4b9b-8277-2294d7f031d9 00:12:38.453 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b7b7e94-f2c6-4511-8a2f-5e99fcabc24e 00:12:38.713 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:38.713 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:38.974 00:12:38.974 real 0m15.677s 00:12:38.974 user 0m15.328s 00:12:38.974 sys 0m1.339s 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:38.974 ************************************ 00:12:38.974 END TEST lvs_grow_clean 00:12:38.974 ************************************ 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.974 ************************************ 
00:12:38.974 START TEST lvs_grow_dirty 00:12:38.974 ************************************ 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:38.974 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:39.235 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:39.235 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:39.235 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:39.235 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:39.235 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:39.494 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:39.494 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:39.494 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 lvol 150 00:12:39.754 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:39.754 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:39.754 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:39.754 [2024-11-20 08:09:44.410503] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:12:39.754 [2024-11-20 08:09:44.410556] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:39.754 true 00:12:39.754 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:39.754 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:40.014 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:40.014 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:40.274 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:40.274 08:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:40.535 [2024-11-20 08:09:45.068520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1813349 00:12:40.535 08:09:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1813349 /var/tmp/bdevperf.sock 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1813349 ']' 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:40.535 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:40.795 [2024-11-20 08:09:45.303493] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:12:40.795 [2024-11-20 08:09:45.303544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813349 ] 00:12:40.795 [2024-11-20 08:09:45.391776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.795 [2024-11-20 08:09:45.421746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.364 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.364 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:41.364 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:41.934 Nvme0n1 00:12:41.934 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:41.934 [ 00:12:41.934 { 00:12:41.934 "name": "Nvme0n1", 00:12:41.934 "aliases": [ 00:12:41.934 "8ef94f81-50c2-412b-ad0f-802abad1b0f8" 00:12:41.934 ], 00:12:41.934 "product_name": "NVMe disk", 00:12:41.934 "block_size": 4096, 00:12:41.934 "num_blocks": 38912, 00:12:41.934 "uuid": "8ef94f81-50c2-412b-ad0f-802abad1b0f8", 00:12:41.934 "numa_id": 0, 00:12:41.934 "assigned_rate_limits": { 00:12:41.934 "rw_ios_per_sec": 0, 00:12:41.934 "rw_mbytes_per_sec": 0, 00:12:41.934 "r_mbytes_per_sec": 0, 00:12:41.934 "w_mbytes_per_sec": 0 00:12:41.934 }, 00:12:41.934 "claimed": false, 00:12:41.934 "zoned": false, 00:12:41.934 "supported_io_types": { 00:12:41.934 "read": true, 
00:12:41.934 "write": true, 00:12:41.934 "unmap": true, 00:12:41.934 "flush": true, 00:12:41.934 "reset": true, 00:12:41.934 "nvme_admin": true, 00:12:41.934 "nvme_io": true, 00:12:41.934 "nvme_io_md": false, 00:12:41.934 "write_zeroes": true, 00:12:41.934 "zcopy": false, 00:12:41.934 "get_zone_info": false, 00:12:41.934 "zone_management": false, 00:12:41.934 "zone_append": false, 00:12:41.934 "compare": true, 00:12:41.934 "compare_and_write": true, 00:12:41.934 "abort": true, 00:12:41.934 "seek_hole": false, 00:12:41.934 "seek_data": false, 00:12:41.934 "copy": true, 00:12:41.934 "nvme_iov_md": false 00:12:41.934 }, 00:12:41.934 "memory_domains": [ 00:12:41.934 { 00:12:41.934 "dma_device_id": "system", 00:12:41.934 "dma_device_type": 1 00:12:41.934 } 00:12:41.934 ], 00:12:41.934 "driver_specific": { 00:12:41.934 "nvme": [ 00:12:41.934 { 00:12:41.934 "trid": { 00:12:41.934 "trtype": "TCP", 00:12:41.934 "adrfam": "IPv4", 00:12:41.934 "traddr": "10.0.0.2", 00:12:41.934 "trsvcid": "4420", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:41.934 }, 00:12:41.934 "ctrlr_data": { 00:12:41.934 "cntlid": 1, 00:12:41.934 "vendor_id": "0x8086", 00:12:41.934 "model_number": "SPDK bdev Controller", 00:12:41.934 "serial_number": "SPDK0", 00:12:41.934 "firmware_revision": "25.01", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:41.934 "oacs": { 00:12:41.934 "security": 0, 00:12:41.934 "format": 0, 00:12:41.934 "firmware": 0, 00:12:41.934 "ns_manage": 0 00:12:41.934 }, 00:12:41.934 "multi_ctrlr": true, 00:12:41.934 "ana_reporting": false 00:12:41.934 }, 00:12:41.934 "vs": { 00:12:41.934 "nvme_version": "1.3" 00:12:41.934 }, 00:12:41.934 "ns_data": { 00:12:41.934 "id": 1, 00:12:41.934 "can_share": true 00:12:41.934 } 00:12:41.934 } 00:12:41.934 ], 00:12:41.934 "mp_policy": "active_passive" 00:12:41.934 } 00:12:41.934 } 00:12:41.934 ] 00:12:41.934 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1813531 00:12:41.934 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:41.934 08:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:42.195 Running I/O for 10 seconds... 00:12:43.133 Latency(us) 00:12:43.133 [2024-11-20T07:09:47.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.133 Nvme0n1 : 1.00 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:12:43.133 [2024-11-20T07:09:47.862Z] =================================================================================================================== 00:12:43.133 [2024-11-20T07:09:47.862Z] Total : 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:12:43.133 00:12:44.075 08:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:44.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.075 Nvme0n1 : 2.00 17898.00 69.91 0.00 0.00 0.00 0.00 0.00 00:12:44.075 [2024-11-20T07:09:48.804Z] =================================================================================================================== 00:12:44.075 [2024-11-20T07:09:48.804Z] Total : 17898.00 69.91 0.00 0.00 0.00 0.00 0.00 00:12:44.075 00:12:44.334 true 00:12:44.334 08:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:44.334 08:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:44.334 08:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:44.334 08:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:44.334 08:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1813531 00:12:45.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.274 Nvme0n1 : 3.00 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:12:45.274 [2024-11-20T07:09:50.003Z] =================================================================================================================== 00:12:45.274 [2024-11-20T07:09:50.003Z] Total : 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:12:45.274 00:12:46.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.216 Nvme0n1 : 4.00 17943.00 70.09 0.00 0.00 0.00 0.00 0.00 00:12:46.216 [2024-11-20T07:09:50.945Z] =================================================================================================================== 00:12:46.216 [2024-11-20T07:09:50.945Z] Total : 17943.00 70.09 0.00 0.00 0.00 0.00 0.00 00:12:46.216 00:12:47.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.156 Nvme0n1 : 5.00 17979.60 70.23 0.00 0.00 0.00 0.00 0.00 00:12:47.156 [2024-11-20T07:09:51.885Z] =================================================================================================================== 00:12:47.156 [2024-11-20T07:09:51.885Z] Total : 17979.60 70.23 0.00 0.00 0.00 0.00 0.00 00:12:47.156 00:12:48.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.097 Nvme0n1 : 6.00 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:12:48.097 [2024-11-20T07:09:52.826Z] =================================================================================================================== 00:12:48.097 
[2024-11-20T07:09:52.826Z] Total : 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:12:48.097 00:12:49.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.038 Nvme0n1 : 7.00 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:12:49.038 [2024-11-20T07:09:53.767Z] =================================================================================================================== 00:12:49.038 [2024-11-20T07:09:53.767Z] Total : 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:12:49.038 00:12:50.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.420 Nvme0n1 : 8.00 18043.62 70.48 0.00 0.00 0.00 0.00 0.00 00:12:50.420 [2024-11-20T07:09:55.149Z] =================================================================================================================== 00:12:50.420 [2024-11-20T07:09:55.149Z] Total : 18043.62 70.48 0.00 0.00 0.00 0.00 0.00 00:12:50.420 00:12:51.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.363 Nvme0n1 : 9.00 18056.56 70.53 0.00 0.00 0.00 0.00 0.00 00:12:51.363 [2024-11-20T07:09:56.092Z] =================================================================================================================== 00:12:51.363 [2024-11-20T07:09:56.092Z] Total : 18056.56 70.53 0.00 0.00 0.00 0.00 0.00 00:12:51.363 00:12:52.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.306 Nvme0n1 : 10.00 18065.20 70.57 0.00 0.00 0.00 0.00 0.00 00:12:52.306 [2024-11-20T07:09:57.035Z] =================================================================================================================== 00:12:52.306 [2024-11-20T07:09:57.035Z] Total : 18065.20 70.57 0.00 0.00 0.00 0.00 0.00 00:12:52.306 00:12:52.306 00:12:52.306 Latency(us) 00:12:52.306 [2024-11-20T07:09:57.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:52.306 Nvme0n1 : 10.01 18067.50 70.58 0.00 0.00 7081.64 2990.08 13817.17 00:12:52.306 [2024-11-20T07:09:57.035Z] =================================================================================================================== 00:12:52.306 [2024-11-20T07:09:57.035Z] Total : 18067.50 70.58 0.00 0.00 7081.64 2990.08 13817.17 00:12:52.306 { 00:12:52.306 "results": [ 00:12:52.306 { 00:12:52.306 "job": "Nvme0n1", 00:12:52.306 "core_mask": "0x2", 00:12:52.306 "workload": "randwrite", 00:12:52.306 "status": "finished", 00:12:52.306 "queue_depth": 128, 00:12:52.306 "io_size": 4096, 00:12:52.306 "runtime": 10.005814, 00:12:52.306 "iops": 18067.49555808253, 00:12:52.306 "mibps": 70.57615452375988, 00:12:52.306 "io_failed": 0, 00:12:52.306 "io_timeout": 0, 00:12:52.306 "avg_latency_us": 7081.638399527972, 00:12:52.306 "min_latency_us": 2990.08, 00:12:52.306 "max_latency_us": 13817.173333333334 00:12:52.306 } 00:12:52.306 ], 00:12:52.306 "core_count": 1 00:12:52.306 } 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1813349 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1813349 ']' 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1813349 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813349 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:52.306 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813349' 00:12:52.307 killing process with pid 1813349 00:12:52.307 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1813349 00:12:52.307 Received shutdown signal, test time was about 10.000000 seconds 00:12:52.307 00:12:52.307 Latency(us) 00:12:52.307 [2024-11-20T07:09:57.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.307 [2024-11-20T07:09:57.036Z] =================================================================================================================== 00:12:52.307 [2024-11-20T07:09:57.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.307 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1813349 00:12:52.307 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:52.568 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:52.828 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:52.828 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:52.828 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:52.828 08:09:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:52.828 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1809638 00:12:52.828 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1809638 00:12:52.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1809638 Killed "${NVMF_APP[@]}" "$@" 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1815834 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1815834 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1815834 ']' 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.829 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:53.089 [2024-11-20 08:09:57.568076] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:12:53.089 [2024-11-20 08:09:57.568132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.089 [2024-11-20 08:09:57.653278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.089 [2024-11-20 08:09:57.688467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.089 [2024-11-20 08:09:57.688501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.089 [2024-11-20 08:09:57.688509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.089 [2024-11-20 08:09:57.688516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.089 [2024-11-20 08:09:57.688521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.089 [2024-11-20 08:09:57.689108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.659 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.659 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:53.659 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:53.659 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.659 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:53.919 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.919 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:53.919 [2024-11-20 08:09:58.547635] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:53.920 [2024-11-20 08:09:58.547723] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:53.920 [2024-11-20 08:09:58.547754] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ef94f81-50c2-412b-ad0f-802abad1b0f8 
00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.920 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:54.180 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ef94f81-50c2-412b-ad0f-802abad1b0f8 -t 2000 00:12:54.180 [ 00:12:54.180 { 00:12:54.180 "name": "8ef94f81-50c2-412b-ad0f-802abad1b0f8", 00:12:54.180 "aliases": [ 00:12:54.180 "lvs/lvol" 00:12:54.180 ], 00:12:54.180 "product_name": "Logical Volume", 00:12:54.180 "block_size": 4096, 00:12:54.180 "num_blocks": 38912, 00:12:54.180 "uuid": "8ef94f81-50c2-412b-ad0f-802abad1b0f8", 00:12:54.180 "assigned_rate_limits": { 00:12:54.180 "rw_ios_per_sec": 0, 00:12:54.180 "rw_mbytes_per_sec": 0, 00:12:54.180 "r_mbytes_per_sec": 0, 00:12:54.180 "w_mbytes_per_sec": 0 00:12:54.180 }, 00:12:54.180 "claimed": false, 00:12:54.180 "zoned": false, 00:12:54.180 "supported_io_types": { 00:12:54.180 "read": true, 00:12:54.180 "write": true, 00:12:54.180 "unmap": true, 00:12:54.180 "flush": false, 00:12:54.180 "reset": true, 00:12:54.180 "nvme_admin": false, 00:12:54.180 "nvme_io": false, 00:12:54.180 "nvme_io_md": false, 00:12:54.180 "write_zeroes": true, 00:12:54.180 "zcopy": false, 00:12:54.180 "get_zone_info": false, 00:12:54.180 "zone_management": false, 00:12:54.180 "zone_append": 
false, 00:12:54.180 "compare": false, 00:12:54.180 "compare_and_write": false, 00:12:54.180 "abort": false, 00:12:54.180 "seek_hole": true, 00:12:54.180 "seek_data": true, 00:12:54.180 "copy": false, 00:12:54.180 "nvme_iov_md": false 00:12:54.180 }, 00:12:54.180 "driver_specific": { 00:12:54.180 "lvol": { 00:12:54.180 "lvol_store_uuid": "000352ab-67ef-4fe7-b21e-8ff55917b7f5", 00:12:54.180 "base_bdev": "aio_bdev", 00:12:54.180 "thin_provision": false, 00:12:54.180 "num_allocated_clusters": 38, 00:12:54.180 "snapshot": false, 00:12:54.180 "clone": false, 00:12:54.180 "esnap_clone": false 00:12:54.180 } 00:12:54.180 } 00:12:54.180 } 00:12:54.180 ] 00:12:54.180 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:54.180 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:54.180 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:54.441 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:54.441 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:54.441 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:12:54.705 [2024-11-20 08:09:59.395926] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.705 08:09:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:54.705 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:54.967 request: 00:12:54.967 { 00:12:54.967 "uuid": "000352ab-67ef-4fe7-b21e-8ff55917b7f5", 00:12:54.967 "method": "bdev_lvol_get_lvstores", 00:12:54.967 "req_id": 1 00:12:54.967 } 00:12:54.967 Got JSON-RPC error response 00:12:54.967 response: 00:12:54.967 { 00:12:54.967 "code": -19, 00:12:54.967 "message": "No such device" 00:12:54.967 } 00:12:54.967 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:54.967 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.967 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.967 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.967 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:55.227 aio_bdev 00:12:55.227 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:55.228 08:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ef94f81-50c2-412b-ad0f-802abad1b0f8 -t 2000 00:12:55.488 [ 00:12:55.488 { 00:12:55.488 "name": "8ef94f81-50c2-412b-ad0f-802abad1b0f8", 00:12:55.488 "aliases": [ 00:12:55.488 "lvs/lvol" 00:12:55.488 ], 00:12:55.488 "product_name": "Logical Volume", 00:12:55.488 "block_size": 4096, 00:12:55.488 "num_blocks": 38912, 00:12:55.488 "uuid": "8ef94f81-50c2-412b-ad0f-802abad1b0f8", 00:12:55.488 "assigned_rate_limits": { 00:12:55.488 "rw_ios_per_sec": 0, 00:12:55.488 "rw_mbytes_per_sec": 0, 00:12:55.488 "r_mbytes_per_sec": 0, 00:12:55.488 "w_mbytes_per_sec": 0 00:12:55.488 }, 00:12:55.488 "claimed": false, 00:12:55.488 "zoned": false, 00:12:55.488 "supported_io_types": { 00:12:55.488 "read": true, 00:12:55.488 "write": true, 00:12:55.488 "unmap": true, 00:12:55.488 "flush": false, 00:12:55.488 "reset": true, 00:12:55.488 "nvme_admin": false, 00:12:55.488 "nvme_io": false, 00:12:55.488 "nvme_io_md": false, 00:12:55.488 "write_zeroes": true, 00:12:55.488 "zcopy": false, 00:12:55.488 "get_zone_info": false, 00:12:55.488 "zone_management": false, 00:12:55.488 "zone_append": false, 00:12:55.488 "compare": false, 00:12:55.488 "compare_and_write": false, 
00:12:55.488 "abort": false, 00:12:55.488 "seek_hole": true, 00:12:55.488 "seek_data": true, 00:12:55.488 "copy": false, 00:12:55.488 "nvme_iov_md": false 00:12:55.488 }, 00:12:55.488 "driver_specific": { 00:12:55.488 "lvol": { 00:12:55.488 "lvol_store_uuid": "000352ab-67ef-4fe7-b21e-8ff55917b7f5", 00:12:55.488 "base_bdev": "aio_bdev", 00:12:55.488 "thin_provision": false, 00:12:55.488 "num_allocated_clusters": 38, 00:12:55.488 "snapshot": false, 00:12:55.488 "clone": false, 00:12:55.488 "esnap_clone": false 00:12:55.488 } 00:12:55.488 } 00:12:55.488 } 00:12:55.488 ] 00:12:55.488 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:55.488 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:55.488 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:55.748 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:55.748 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:55.748 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:55.748 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:55.748 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ef94f81-50c2-412b-ad0f-802abad1b0f8 00:12:56.011 08:10:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 000352ab-67ef-4fe7-b21e-8ff55917b7f5 00:12:56.271 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:56.271 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:56.532 00:12:56.532 real 0m17.478s 00:12:56.532 user 0m45.493s 00:12:56.532 sys 0m3.007s 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:56.532 ************************************ 00:12:56.532 END TEST lvs_grow_dirty 00:12:56.532 ************************************ 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:56.532 nvmf_trace.0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:56.532 rmmod nvme_tcp 00:12:56.532 rmmod nvme_fabrics 00:12:56.532 rmmod nvme_keyring 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1815834 ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1815834 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1815834 ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1815834 
00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815834 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815834' 00:12:56.532 killing process with pid 1815834 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1815834 00:12:56.532 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1815834 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:56.795 08:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:58.708 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:58.971 08:10:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:12:58.971 00:12:58.971 real 0m45.644s 00:12:58.971 user 1m7.583s 00:12:58.971 sys 0m11.191s 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:58.971 ************************************ 00:12:58.971 END TEST nvmf_lvs_grow 00:12:58.971 ************************************ 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:58.971 ************************************ 00:12:58.971 START TEST nvmf_bdev_io_wait 00:12:58.971 ************************************ 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:12:58.971 * Looking for test storage... 00:12:58.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.971 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:59.240 08:10:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.240 --rc genhtml_branch_coverage=1 00:12:59.240 --rc genhtml_function_coverage=1 00:12:59.240 --rc genhtml_legend=1 00:12:59.240 --rc geninfo_all_blocks=1 00:12:59.240 --rc geninfo_unexecuted_blocks=1 00:12:59.240 00:12:59.240 ' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.240 --rc genhtml_branch_coverage=1 00:12:59.240 --rc genhtml_function_coverage=1 00:12:59.240 --rc genhtml_legend=1 00:12:59.240 --rc geninfo_all_blocks=1 00:12:59.240 --rc geninfo_unexecuted_blocks=1 00:12:59.240 00:12:59.240 ' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.240 --rc genhtml_branch_coverage=1 00:12:59.240 --rc genhtml_function_coverage=1 00:12:59.240 --rc genhtml_legend=1 00:12:59.240 --rc geninfo_all_blocks=1 00:12:59.240 --rc geninfo_unexecuted_blocks=1 00:12:59.240 00:12:59.240 ' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.240 --rc genhtml_branch_coverage=1 00:12:59.240 --rc genhtml_function_coverage=1 00:12:59.240 --rc genhtml_legend=1 00:12:59.240 --rc geninfo_all_blocks=1 00:12:59.240 --rc geninfo_unexecuted_blocks=1 00:12:59.240 00:12:59.240 ' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.240 08:10:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.240 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:59.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
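Annotation: the `/var/jenkins/.../common.sh: line 31: [: : integer expression expected` message in the chunk above is a classic bash pitfall, visible in the traced test `'[' '' -eq 1 ']'`: a numeric operator applied to an empty (unset) variable. A sketch of the failure mode and a defensive form (the variable name here is hypothetical, not the one common.sh uses):

```shell
#!/usr/bin/env bash
# Reproduce and defuse "[: : integer expression expected".
flag=""    # unset/empty in the failing run

# This is the failing shape: [ "" -eq 1 ] is not a valid integer test.
# It writes the error to stderr and returns a non-zero status.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "interactive"
fi

# Defensive form: supply a default so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "interactive"
else
    echo "non-interactive"
fi
# prints: non-interactive
```

Note that `${flag:-0}` (with the colon) substitutes the default for both unset and empty values, which is what this case needs; `${flag-0}` would leave the empty string through.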
00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:12:59.241 08:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:07.557 08:10:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.557 08:10:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:07.557 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:07.557 08:10:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:07.557 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:07.557 Found net devices under 0000:31:00.0: cvl_0_0 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:07.557 Found net devices under 0000:31:00.1: cvl_0_1 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:07.557 
08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
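Annotation: the `create_target_ns` sequence above isolates the target side in a dedicated network namespace (`ip netns add nvmf_ns_spdk`), then prefixes every target-side command with `ip netns exec nvmf_ns_spdk` via the `NVMF_TARGET_NS_CMD` array. Actually creating a namespace requires root, so this sketch models only the command-prefixing logic, echoing what the real script would execute:

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_TARGET_NS_CMD prefixing seen in setup.sh: target-side
# commands run inside the nvmf_ns_spdk namespace.
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

in_target_ns() {
    # Real script: "${NVMF_TARGET_NS_CMD[@]}" "$@"  (executes in the netns).
    # Here we print the command instead of running it, since no netns exists.
    echo "${NVMF_TARGET_NS_CMD[@]}" "$@"
}

# The steps the log shows, as they would be issued:
in_target_ns ip link set lo up
in_target_ns ip addr add 10.0.0.2/24 dev cvl_0_1
in_target_ns ip link set cvl_0_1 up
```

The design point the log illustrates: the initiator keeps `cvl_0_0` (10.0.0.1) in the default namespace while `cvl_0_1` (10.0.0.2) is moved into the namespace with `ip link set cvl_0_1 netns nvmf_ns_spdk`, so initiator and target traffic traverse the real NIC pair rather than loopback.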
00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:07.557 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:07.558 08:10:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:07.558 10.0.0.1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:07.558 10.0.0.2 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:07.558 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:07.558 
08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:07.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
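The ifalias lookups driven above by nvmf/setup.sh's get_ip_address can be sketched roughly as below. This is not the script's actual code: the sysfs root is faked with a tmpdir so the sketch runs without real cvl_0_* devices, and only the cat-ifalias step is modeled; the device names and the 10.0.0.1 value mirror the expanded trace.

```shell
#!/usr/bin/env bash
# Sketch of the ifalias-based IP lookup seen in the trace: each test
# interface's address is kept in /sys/class/net/<dev>/ifalias.
# SYSFS_NET is a fake sysfs root (assumption) so this runs anywhere.
set -u

SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/cvl_0_0"
echo "10.0.0.1" > "$SYSFS_NET/cvl_0_0/ifalias"

get_ip_address() {
    # Print the IP stored in a device's ifalias; print nothing if unset.
    local dev=$1 ip
    ip=$(cat "$SYSFS_NET/$dev/ifalias" 2>/dev/null)
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address cvl_0_0   # -> 10.0.0.1
```

In the real harness the same lookup is optionally wrapped in `ip netns exec nvmf_ns_spdk`, which is why the trace shows both bare and namespaced `cat .../ifalias` invocations.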
00:13:07.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.664 ms 00:13:07.558 00:13:07.558 --- 10.0.0.1 ping statistics --- 00:13:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.558 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:07.558 
08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:07.558 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:07.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:13:07.559 00:13:07.559 --- 10.0.0.2 ping statistics --- 00:13:07.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.559 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:07.559 08:10:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:07.559 08:10:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1821302 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1821302 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1821302 ']' 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.559 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:07.820 [2024-11-20 08:10:12.312514] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:13:07.820 [2024-11-20 08:10:12.312588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.820 [2024-11-20 08:10:12.403529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.820 [2024-11-20 08:10:12.446337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.820 [2024-11-20 08:10:12.446374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.820 [2024-11-20 08:10:12.446383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.820 [2024-11-20 08:10:12.446390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.820 [2024-11-20 08:10:12.446397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.820 [2024-11-20 08:10:12.448148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.820 [2024-11-20 08:10:12.448265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.820 [2024-11-20 08:10:12.448420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.820 [2024-11-20 08:10:12.448421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.761 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.762 [2024-11-20 08:10:13.224722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.762 Malloc0 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:08.762 08:10:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:08.762 [2024-11-20 08:10:13.283911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1821652 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1821654 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:08.762 { 00:13:08.762 "params": { 00:13:08.762 "name": "Nvme$subsystem", 00:13:08.762 "trtype": "$TEST_TRANSPORT", 00:13:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.762 "adrfam": "ipv4", 00:13:08.762 "trsvcid": "$NVMF_PORT", 00:13:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.762 "hdgst": ${hdgst:-false}, 00:13:08.762 "ddgst": ${ddgst:-false} 00:13:08.762 }, 00:13:08.762 "method": "bdev_nvme_attach_controller" 00:13:08.762 } 00:13:08.762 EOF 00:13:08.762 )") 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1821656 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:08.762 { 00:13:08.762 "params": { 00:13:08.762 "name": "Nvme$subsystem", 00:13:08.762 "trtype": "$TEST_TRANSPORT", 00:13:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.762 "adrfam": "ipv4", 00:13:08.762 "trsvcid": "$NVMF_PORT", 00:13:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.762 "hdgst": ${hdgst:-false}, 00:13:08.762 "ddgst": ${ddgst:-false} 00:13:08.762 }, 
00:13:08.762 "method": "bdev_nvme_attach_controller" 00:13:08.762 } 00:13:08.762 EOF 00:13:08.762 )") 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1821659 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:08.762 { 00:13:08.762 "params": { 00:13:08.762 "name": "Nvme$subsystem", 00:13:08.762 "trtype": "$TEST_TRANSPORT", 00:13:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.762 "adrfam": "ipv4", 00:13:08.762 "trsvcid": "$NVMF_PORT", 00:13:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.762 "hdgst": ${hdgst:-false}, 00:13:08.762 "ddgst": ${ddgst:-false} 00:13:08.762 }, 00:13:08.762 "method": "bdev_nvme_attach_controller" 00:13:08.762 } 00:13:08.762 EOF 00:13:08.762 )") 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json 
/dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:08.762 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:08.762 { 00:13:08.762 "params": { 00:13:08.762 "name": "Nvme$subsystem", 00:13:08.762 "trtype": "$TEST_TRANSPORT", 00:13:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.762 "adrfam": "ipv4", 00:13:08.763 "trsvcid": "$NVMF_PORT", 00:13:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.763 "hdgst": ${hdgst:-false}, 00:13:08.763 "ddgst": ${ddgst:-false} 00:13:08.763 }, 00:13:08.763 "method": "bdev_nvme_attach_controller" 00:13:08.763 } 00:13:08.763 EOF 00:13:08.763 )") 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1821652 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
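The gen_nvmf_target_json expansion visible above builds one bdev_nvme_attach_controller stanza per subsystem and joins them with `IFS=,` before handing the result to bdevperf over /dev/fd/63. A trimmed sketch of that assembly, with the stanza reduced to its core fields and the transport/address values copied from the expanded output in the log (not computed here):

```shell
#!/usr/bin/env bash
# Sketch of the config-array build-and-join pattern from the trace.
# Values below are examples lifted from the log, not live lookups.
set -u

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
    # One attach-controller stanza per subsystem (trimmed field set).
    config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s" }, "method": "bdev_nvme_attach_controller" }' \
        "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem")")
done

# "${config[*]}" joins array elements with the first character of IFS.
IFS=,
joined="${config[*]}"
unset IFS
echo "$joined"
```

In the real script the joined stanzas are additionally run through `jq .` (the `jq .` calls in the trace) to pretty-print and validate the JSON before bdevperf consumes it.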
00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:08.763 "params": { 00:13:08.763 "name": "Nvme1", 00:13:08.763 "trtype": "tcp", 00:13:08.763 "traddr": "10.0.0.2", 00:13:08.763 "adrfam": "ipv4", 00:13:08.763 "trsvcid": "4420", 00:13:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.763 "hdgst": false, 00:13:08.763 "ddgst": false 00:13:08.763 }, 00:13:08.763 "method": "bdev_nvme_attach_controller" 00:13:08.763 }' 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:08.763 "params": { 00:13:08.763 "name": "Nvme1", 00:13:08.763 "trtype": "tcp", 00:13:08.763 "traddr": "10.0.0.2", 00:13:08.763 "adrfam": "ipv4", 00:13:08.763 "trsvcid": "4420", 00:13:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.763 "hdgst": false, 00:13:08.763 "ddgst": false 00:13:08.763 }, 00:13:08.763 "method": "bdev_nvme_attach_controller" 00:13:08.763 }' 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:08.763 "params": { 00:13:08.763 "name": "Nvme1", 00:13:08.763 "trtype": "tcp", 00:13:08.763 "traddr": "10.0.0.2", 00:13:08.763 "adrfam": "ipv4", 00:13:08.763 "trsvcid": "4420", 00:13:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.763 "hdgst": false, 00:13:08.763 "ddgst": false 00:13:08.763 }, 00:13:08.763 "method": 
"bdev_nvme_attach_controller" 00:13:08.763 }' 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:08.763 08:10:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:08.763 "params": { 00:13:08.763 "name": "Nvme1", 00:13:08.763 "trtype": "tcp", 00:13:08.763 "traddr": "10.0.0.2", 00:13:08.763 "adrfam": "ipv4", 00:13:08.763 "trsvcid": "4420", 00:13:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.763 "hdgst": false, 00:13:08.763 "ddgst": false 00:13:08.763 }, 00:13:08.763 "method": "bdev_nvme_attach_controller" 00:13:08.763 }' 00:13:08.763 [2024-11-20 08:10:13.338319] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:13:08.763 [2024-11-20 08:10:13.338362] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:08.763 [2024-11-20 08:10:13.339883] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:13:08.763 [2024-11-20 08:10:13.339931] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:08.763 [2024-11-20 08:10:13.344808] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:13:08.763 [2024-11-20 08:10:13.344855] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:08.763 [2024-11-20 08:10:13.345071] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:13:08.763 [2024-11-20 08:10:13.345115] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:09.026 [2024-11-20 08:10:13.489469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.026 [2024-11-20 08:10:13.517761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:09.026 [2024-11-20 08:10:13.544661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.026 [2024-11-20 08:10:13.574519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:09.026 [2024-11-20 08:10:13.593948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.026 [2024-11-20 08:10:13.622873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:09.026 [2024-11-20 08:10:13.641631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.026 [2024-11-20 08:10:13.669467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:09.026 Running I/O for 1 seconds... 00:13:09.287 Running I/O for 1 seconds... 00:13:09.287 Running I/O for 1 seconds... 00:13:09.287 Running I/O for 1 seconds... 
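The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances (write, read, flush, unmap) started in parallel and later reaped via `wait $WRITE_PID` etc. The launch/wait shape can be sketched as below; the real bdevperf invocation is replaced by a short `sleep` stand-in so the sketch runs anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the parallel per-workload launch/wait pattern in the trace.
# `sleep` stands in for: bdevperf --json /dev/fd/63 -q 128 -o 4096 -w <wl> -t 1
set -u

pids=()
for workload in write read flush unmap; do
    sleep 0.1 &                         # stand-in for one bdevperf instance
    pids+=("$!")
    echo "started $workload as pid $!"
done

for pid in "${pids[@]}"; do
    wait "$pid"                         # mirrors wait $WRITE_PID / $READ_PID / ...
done
echo "all workloads finished"
```

Running each workload in its own process (with distinct core masks, `-m 0x10` through `-m 0x80` in the log) keeps the four I/O patterns hitting the target concurrently rather than serially.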
00:13:10.230 17842.00 IOPS, 69.70 MiB/s 00:13:10.230 Latency(us) 00:13:10.230 [2024-11-20T07:10:14.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.230 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:10.230 Nvme1n1 : 1.01 17898.11 69.91 0.00 0.00 7130.07 3522.56 15947.09 00:13:10.230 [2024-11-20T07:10:14.959Z] =================================================================================================================== 00:13:10.230 [2024-11-20T07:10:14.959Z] Total : 17898.11 69.91 0.00 0.00 7130.07 3522.56 15947.09 00:13:10.230 188984.00 IOPS, 738.22 MiB/s 00:13:10.230 Latency(us) 00:13:10.230 [2024-11-20T07:10:14.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.230 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:10.230 Nvme1n1 : 1.00 188610.06 736.76 0.00 0.00 674.52 305.49 1979.73 00:13:10.230 [2024-11-20T07:10:14.959Z] =================================================================================================================== 00:13:10.230 [2024-11-20T07:10:14.959Z] Total : 188610.06 736.76 0.00 0.00 674.52 305.49 1979.73 00:13:10.230 11789.00 IOPS, 46.05 MiB/s 00:13:10.230 Latency(us) 00:13:10.230 [2024-11-20T07:10:14.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.230 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:10.230 Nvme1n1 : 1.01 11837.17 46.24 0.00 0.00 10774.51 5406.72 15291.73 00:13:10.230 [2024-11-20T07:10:14.959Z] =================================================================================================================== 00:13:10.230 [2024-11-20T07:10:14.959Z] Total : 11837.17 46.24 0.00 0.00 10774.51 5406.72 15291.73 00:13:10.230 12493.00 IOPS, 48.80 MiB/s [2024-11-20T07:10:14.959Z] 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1821654 00:13:10.230 00:13:10.230 Latency(us) 
00:13:10.230 [2024-11-20T07:10:14.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.230 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:10.230 Nvme1n1 : 1.01 12571.32 49.11 0.00 0.00 10152.02 4150.61 22391.47 00:13:10.230 [2024-11-20T07:10:14.959Z] =================================================================================================================== 00:13:10.230 [2024-11-20T07:10:14.959Z] Total : 12571.32 49.11 0.00 0.00 10152.02 4150.61 22391.47 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1821656 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1821659 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:13:10.230 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:10.230 rmmod nvme_tcp 00:13:10.490 rmmod nvme_fabrics 00:13:10.490 rmmod nvme_keyring 00:13:10.490 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1821302 ']' 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1821302 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1821302 ']' 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1821302 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.490 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821302 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821302' 00:13:10.491 killing process with pid 1821302 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1821302 00:13:10.491 08:10:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1821302 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:10.491 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:13.035 08:10:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:13.035 00:13:13.035 real 0m13.746s 00:13:13.035 user 0m18.569s 00:13:13.035 sys 0m7.762s 00:13:13.035 
08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:13.035 ************************************ 00:13:13.035 END TEST nvmf_bdev_io_wait 00:13:13.035 ************************************ 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.035 ************************************ 00:13:13.035 START TEST nvmf_queue_depth 00:13:13.035 ************************************ 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:13.035 * Looking for test storage... 
00:13:13.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:13.035 
08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:13.035 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:13.035 --rc genhtml_branch_coverage=1 00:13:13.035 --rc genhtml_function_coverage=1 00:13:13.035 --rc genhtml_legend=1 00:13:13.035 --rc geninfo_all_blocks=1 00:13:13.035 --rc geninfo_unexecuted_blocks=1 00:13:13.035 00:13:13.035 ' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:13.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.035 --rc genhtml_branch_coverage=1 00:13:13.035 --rc genhtml_function_coverage=1 00:13:13.035 --rc genhtml_legend=1 00:13:13.035 --rc geninfo_all_blocks=1 00:13:13.035 --rc geninfo_unexecuted_blocks=1 00:13:13.035 00:13:13.035 ' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:13.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.035 --rc genhtml_branch_coverage=1 00:13:13.035 --rc genhtml_function_coverage=1 00:13:13.035 --rc genhtml_legend=1 00:13:13.035 --rc geninfo_all_blocks=1 00:13:13.035 --rc geninfo_unexecuted_blocks=1 00:13:13.035 00:13:13.035 ' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:13.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.035 --rc genhtml_branch_coverage=1 00:13:13.035 --rc genhtml_function_coverage=1 00:13:13.035 --rc genhtml_legend=1 00:13:13.035 --rc geninfo_all_blocks=1 00:13:13.035 --rc geninfo_unexecuted_blocks=1 00:13:13.035 00:13:13.035 ' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.035 08:10:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.035 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:13.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:13:13.036 08:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:13:21.210 08:10:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:21.210 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:21.210 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.210 08:10:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:21.210 Found net devices under 0000:31:00.0: cvl_0_0 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:21.210 Found net devices under 0000:31:00.1: cvl_0_1 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:13:21.210 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:21.211 08:10:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:21.211 10.0.0.1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:21.211 10.0.0.2 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:21.211 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:21.211 
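The `val_to_ip` calls traced above turn the `ip_pool` counter (167772161 is 0x0A000001) into a dotted quad. A minimal sketch of that conversion, assuming the same big-endian octet split the trace shows (the real helper in nvmf/setup.sh may split the value differently before the `printf`):

```shell
# Sketch of setup.sh's val_to_ip step: unpack a 32-bit integer into an IPv4
# dotted-quad string, as seen for 167772161 -> 10.0.0.1 in the trace.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 255)) $(((val >> 8) & 255)) $((val & 255))
}

val_to_ip 167772161   # initiator address -> 10.0.0.1
val_to_ip 167772162   # target address    -> 10.0.0.2
```

This matches why consecutive pool values land on the same /24: only the low octet changes for each initiator/target pair.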
08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:21.472 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:21.473 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:21.473 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.473 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.473 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:21.473 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:21.473 08:10:26 
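Condensed, the `create_target_ns` and `setup_interface_pair` steps traced above amount to the command sequence below. This is a dry-run sketch: the `run` wrapper only echoes each command, since the real ones need root and the physical `cvl_0_*` devices from this test rig; drop the wrapper to execute them for real.

```shell
# Dry-run sketch of the namespace/interface setup performed by nvmf/setup.sh.
# Device and namespace names are taken from the trace above.
ns=nvmf_ns_spdk
initiator=cvl_0_0
target=cvl_0_1
run() { echo "+ $*"; }   # replace with "$@" to actually execute (needs root)

run ip netns add "$ns"
run ip netns exec "$ns" ip link set lo up          # bring up loopback in the ns
run ip link set "$target" netns "$ns"              # move target NIC into the ns
run ip addr add 10.0.0.1/24 dev "$initiator"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
run ip link set "$initiator" up
run ip netns exec "$ns" ip link set "$target" up
# open the NVMe/TCP listener port on the initiator side
run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
```

Moving one port of the pair into a namespace is what lets a single host act as both initiator and target over a real cable loop.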
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:21.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.690 ms 00:13:21.473 00:13:21.473 --- 10.0.0.1 ping statistics --- 00:13:21.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.473 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:21.473 08:10:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:21.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:13:21.473 00:13:21.473 --- 10.0.0.2 ping statistics --- 00:13:21.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.473 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:21.473 
08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:21.473 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:21.474 08:10:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:21.474 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:21.734 
08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1826745 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1826745 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1826745 ']' 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.734 08:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:21.735 [2024-11-20 08:10:26.292970] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:13:21.735 [2024-11-20 08:10:26.293038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.735 [2024-11-20 08:10:26.405778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.735 [2024-11-20 08:10:26.455410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.735 [2024-11-20 08:10:26.455458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.735 [2024-11-20 08:10:26.455467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.735 [2024-11-20 08:10:26.455474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.735 [2024-11-20 08:10:26.455480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.735 [2024-11-20 08:10:26.456303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.677 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 [2024-11-20 08:10:27.152275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 Malloc0 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 08:10:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 [2024-11-20 08:10:27.197351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1827092 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
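The target bring-up that queue_depth.sh performs via `rpc_cmd` (lines 23-27 of the script, per the trace) condenses to the sequence below. The `run` echo wrapper and relative paths are illustrative; against a live target, drop the wrapper and point `rpc` at the build's scripts/rpc.py.

```shell
# Sketch of the nvmf target configuration traced above (dry-run wrapper).
run() { echo "+ $*"; }                 # drop for real execution
tgt=./build/bin/nvmf_tgt               # path assumed; see the trace for the CI path
rpc=./scripts/rpc.py

# start the target inside the namespace, core mask 0x2, all tracepoints on
run ip netns exec nvmf_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0x2
# transport with optimized settings and 8192-byte in-capsule data
run "$rpc" nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks as the backing namespace
run "$rpc" bdev_malloc_create 64 512 -b Malloc0
run "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener is bound to 10.0.0.2, the address assigned to the target-side NIC inside the namespace during setup.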
EXIT 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1827092 /var/tmp/bdevperf.sock 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1827092 ']' 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.678 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 [2024-11-20 08:10:27.254794] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:13:22.678 [2024-11-20 08:10:27.254856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827092 ] 00:13:22.678 [2024-11-20 08:10:27.337009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.678 [2024-11-20 08:10:27.378526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.620 NVMe0n1 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.620 08:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:23.620 Running I/O for 10 seconds... 
00:13:25.947 10370.00 IOPS, 40.51 MiB/s [2024-11-20T07:10:31.617Z] 11121.00 IOPS, 43.44 MiB/s [2024-11-20T07:10:32.557Z] 11267.33 IOPS, 44.01 MiB/s [2024-11-20T07:10:33.497Z] 11401.00 IOPS, 44.54 MiB/s [2024-11-20T07:10:34.437Z] 11469.60 IOPS, 44.80 MiB/s [2024-11-20T07:10:35.377Z] 11525.33 IOPS, 45.02 MiB/s [2024-11-20T07:10:36.316Z] 11557.71 IOPS, 45.15 MiB/s [2024-11-20T07:10:37.698Z] 11618.88 IOPS, 45.39 MiB/s [2024-11-20T07:10:38.638Z] 11622.78 IOPS, 45.40 MiB/s [2024-11-20T07:10:38.638Z] 11670.60 IOPS, 45.59 MiB/s 00:13:33.909 Latency(us) 00:13:33.909 [2024-11-20T07:10:38.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.909 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:33.909 Verification LBA range: start 0x0 length 0x4000 00:13:33.909 NVMe0n1 : 10.12 11635.61 45.45 0.00 0.00 87346.56 24685.23 85633.71 00:13:33.909 [2024-11-20T07:10:38.638Z] =================================================================================================================== 00:13:33.909 [2024-11-20T07:10:38.638Z] Total : 11635.61 45.45 0.00 0.00 87346.56 24685.23 85633.71 00:13:33.909 { 00:13:33.909 "results": [ 00:13:33.909 { 00:13:33.909 "job": "NVMe0n1", 00:13:33.909 "core_mask": "0x1", 00:13:33.909 "workload": "verify", 00:13:33.909 "status": "finished", 00:13:33.909 "verify_range": { 00:13:33.909 "start": 0, 00:13:33.909 "length": 16384 00:13:33.909 }, 00:13:33.909 "queue_depth": 1024, 00:13:33.909 "io_size": 4096, 00:13:33.909 "runtime": 10.115327, 00:13:33.909 "iops": 11635.610000546696, 00:13:33.909 "mibps": 45.45160156463553, 00:13:33.909 "io_failed": 0, 00:13:33.909 "io_timeout": 0, 00:13:33.909 "avg_latency_us": 87346.5602428815, 00:13:33.909 "min_latency_us": 24685.226666666666, 00:13:33.909 "max_latency_us": 85633.70666666667 00:13:33.909 } 00:13:33.909 ], 00:13:33.909 "core_count": 1 00:13:33.909 } 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1827092 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1827092 ']' 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1827092 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827092 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.909 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827092' 00:13:33.910 killing process with pid 1827092 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1827092 00:13:33.910 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.910 00:13:33.910 Latency(us) 00:13:33.910 [2024-11-20T07:10:38.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.910 [2024-11-20T07:10:38.639Z] =================================================================================================================== 00:13:33.910 [2024-11-20T07:10:38.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1827092 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:33.910 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:33.910 rmmod nvme_tcp 00:13:34.170 rmmod nvme_fabrics 00:13:34.170 rmmod nvme_keyring 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1826745 ']' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1826745 ']' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1826745' 00:13:34.170 killing process with pid 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1826745 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:34.170 08:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:36.714 08:10:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:13:36.714 
08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:13:36.714 00:13:36.714 real 0m23.595s 00:13:36.714 user 0m26.138s 00:13:36.714 sys 0m7.800s 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:36.714 ************************************ 00:13:36.714 END TEST nvmf_queue_depth 00:13:36.714 ************************************ 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.714 08:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:36.714 ************************************ 00:13:36.714 START TEST nvmf_nmic 00:13:36.714 ************************************ 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:36.714 * Looking for test storage... 
00:13:36.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.714 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.715 08:10:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.715 --rc genhtml_branch_coverage=1 00:13:36.715 --rc genhtml_function_coverage=1 00:13:36.715 --rc genhtml_legend=1 00:13:36.715 --rc geninfo_all_blocks=1 00:13:36.715 --rc geninfo_unexecuted_blocks=1 
00:13:36.715 00:13:36.715 ' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.715 --rc genhtml_branch_coverage=1 00:13:36.715 --rc genhtml_function_coverage=1 00:13:36.715 --rc genhtml_legend=1 00:13:36.715 --rc geninfo_all_blocks=1 00:13:36.715 --rc geninfo_unexecuted_blocks=1 00:13:36.715 00:13:36.715 ' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.715 --rc genhtml_branch_coverage=1 00:13:36.715 --rc genhtml_function_coverage=1 00:13:36.715 --rc genhtml_legend=1 00:13:36.715 --rc geninfo_all_blocks=1 00:13:36.715 --rc geninfo_unexecuted_blocks=1 00:13:36.715 00:13:36.715 ' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.715 --rc genhtml_branch_coverage=1 00:13:36.715 --rc genhtml_function_coverage=1 00:13:36.715 --rc genhtml_legend=1 00:13:36.715 --rc geninfo_all_blocks=1 00:13:36.715 --rc geninfo_unexecuted_blocks=1 00:13:36.715 00:13:36.715 ' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.715 08:10:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:36.715 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:36.715 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:13:36.716 08:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.854 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:13:44.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:44.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
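The loop traced above resolves each PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the directory prefix. A standalone sketch of that pattern, using a fake sysfs tree in a temp dir so it runs without real hardware (paths and the `cvl_0_0` name mirror the trace but are fabricated here):

```shell
# Build a fake sysfs layout: one netdev directory under a PCI device node.
root=$(mktemp -d)
pci=0000:31:00.0
mkdir -p "$root/sys/bus/pci/devices/$pci/net/cvl_0_0"

# Same two steps as nvmf/common.sh@227 and @243: glob, then keep basenames.
pci_net_devs=("$root/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

msg="Found net devices under $pci: ${pci_net_devs[*]}"
echo "$msg"
rm -rf "$root"
```

On a real system the glob runs against `/sys` directly; an empty glob (no netdev bound to the PCI function) is what the `(( 1 == 0 ))` guard in the trace is checking for.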
00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:44.854 Found net devices under 0000:31:00.0: cvl_0_0 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:44.854 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:44.855 Found net devices under 0000:31:00.1: cvl_0_1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.855 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
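The `setup_interface_pair 0 phy 167772161 tcp` call above passes IPs around as plain integers drawn from `ip_pool=0x0a000001`, and `val_to_ip` turns them back into dotted quads with a `printf '%u.%u.%u.%u'`. A self-contained rewrite of that conversion (the function name matches nvmf/setup.sh, but this body is a sketch, not the script's exact code):

```shell
# Convert a 32-bit integer to dotted-quad notation, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

Keeping the pool as an integer is what lets the trace's `(( ip_pool += 2 ))` hand out consecutive initiator/target address pairs per interface.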
00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:44.855 10.0.0.1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:44.855 10.0.0.2 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:44.855 
08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:44.855 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:45.118 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:45.118 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:45.119 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:45.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.698 ms 00:13:45.119 00:13:45.119 --- 10.0.0.1 ping statistics --- 00:13:45.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.119 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:45.119 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:45.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:45.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:13:45.119 00:13:45.119 --- 10.0.0.2 ping statistics --- 00:13:45.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.119 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:45.119 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:13:45.119 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:45.119 08:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:45.119 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1834158 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1834158 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1834158 ']' 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
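`waitforlisten 1834158` above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A hedged sketch of that polling pattern (the real helper in common/autotest_common.sh also checks that the pid is still alive and uses an RPC probe rather than a bare path check; `waitfor_path` here is a simplified stand-in):

```shell
# Poll for a path to appear, with a bounded retry budget.
waitfor_path() {
  local path=$1 max_retries=${2:-100}
  while (( max_retries-- > 0 )); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &    # simulate the target coming up late
waitfor_path "$sock" 50; rc=$?
wait
rm -f "$sock"
```

Bounding the retries is what turns a hung target into a test failure instead of a wedged CI job.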
00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.120 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.381 [2024-11-20 08:10:49.878300] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:13:45.382 [2024-11-20 08:10:49.878366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.382 [2024-11-20 08:10:49.972546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.382 [2024-11-20 08:10:50.015947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.382 [2024-11-20 08:10:50.015985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.382 [2024-11-20 08:10:50.015994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.382 [2024-11-20 08:10:50.016001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.382 [2024-11-20 08:10:50.016007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.382 [2024-11-20 08:10:50.017595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.382 [2024-11-20 08:10:50.017713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.382 [2024-11-20 08:10:50.017896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.382 [2024-11-20 08:10:50.017916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 [2024-11-20 08:10:50.738388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.482 Malloc0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 [2024-11-20 08:10:50.810182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:46.482 test case1: single bdev can't be used in multiple subsystems 
00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 [2024-11-20 08:10:50.846058] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:46.482 [2024-11-20 08:10:50.846077] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:46.482 [2024-11-20 08:10:50.846086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.482 request: 00:13:46.482 { 00:13:46.482 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:46.482 "namespace": { 00:13:46.482 
"bdev_name": "Malloc0", 00:13:46.482 "no_auto_visible": false 00:13:46.482 }, 00:13:46.482 "method": "nvmf_subsystem_add_ns", 00:13:46.482 "req_id": 1 00:13:46.482 } 00:13:46.482 Got JSON-RPC error response 00:13:46.482 response: 00:13:46.482 { 00:13:46.482 "code": -32602, 00:13:46.482 "message": "Invalid parameters" 00:13:46.482 } 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:46.482 Adding namespace failed - expected result. 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:46.482 test case2: host connect to nvmf target in multiple paths 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 [2024-11-20 08:10:50.858215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.482 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.887 08:10:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:49.268 08:10:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.269 08:10:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:49.269 08:10:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.269 08:10:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:49.269 08:10:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:51.811 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:51.811 [global] 00:13:51.811 thread=1 00:13:51.811 invalidate=1 00:13:51.811 rw=write 00:13:51.811 time_based=1 00:13:51.811 runtime=1 00:13:51.811 ioengine=libaio 00:13:51.811 direct=1 00:13:51.811 bs=4096 00:13:51.811 iodepth=1 00:13:51.811 
norandommap=0 00:13:51.811 numjobs=1 00:13:51.811 00:13:51.811 verify_dump=1 00:13:51.811 verify_backlog=512 00:13:51.811 verify_state_save=0 00:13:51.811 do_verify=1 00:13:51.811 verify=crc32c-intel 00:13:51.811 [job0] 00:13:51.811 filename=/dev/nvme0n1 00:13:51.811 Could not set queue depth (nvme0n1) 00:13:51.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.811 fio-3.35 00:13:51.811 Starting 1 thread 00:13:53.194 00:13:53.194 job0: (groupid=0, jobs=1): err= 0: pid=1835615: Wed Nov 20 08:10:57 2024 00:13:53.194 read: IOPS=678, BW=2713KiB/s (2778kB/s)(2716KiB/1001msec) 00:13:53.194 slat (nsec): min=6254, max=60734, avg=22872.91, stdev=8207.76 00:13:53.194 clat (usec): min=384, max=977, avg=686.93, stdev=88.54 00:13:53.194 lat (usec): min=410, max=1003, avg=709.80, stdev=92.13 00:13:53.194 clat percentiles (usec): 00:13:53.194 | 1.00th=[ 453], 5.00th=[ 537], 10.00th=[ 562], 20.00th=[ 619], 00:13:53.194 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 725], 00:13:53.194 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 783], 95.00th=[ 799], 00:13:53.194 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 979], 99.95th=[ 979], 00:13:53.194 | 99.99th=[ 979] 00:13:53.194 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:53.194 slat (nsec): min=8911, max=65906, avg=29955.86, stdev=9727.78 00:13:53.194 clat (usec): min=114, max=771, avg=464.28, stdev=99.29 00:13:53.194 lat (usec): min=124, max=821, avg=494.23, stdev=103.74 00:13:53.194 clat percentiles (usec): 00:13:53.194 | 1.00th=[ 229], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[ 383], 00:13:53.194 | 30.00th=[ 416], 40.00th=[ 457], 50.00th=[ 474], 60.00th=[ 490], 00:13:53.194 | 70.00th=[ 506], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 619], 00:13:53.194 | 99.00th=[ 685], 99.50th=[ 693], 99.90th=[ 734], 99.95th=[ 775], 00:13:53.194 | 99.99th=[ 775] 00:13:53.194 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 
0.00, samples=1 00:13:53.194 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:53.194 lat (usec) : 250=1.29%, 500=40.46%, 750=46.21%, 1000=12.04% 00:13:53.194 cpu : usr=3.30%, sys=6.10%, ctx=1703, majf=0, minf=1 00:13:53.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.194 issued rwts: total=679,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.194 00:13:53.194 Run status group 0 (all jobs): 00:13:53.194 READ: bw=2713KiB/s (2778kB/s), 2713KiB/s-2713KiB/s (2778kB/s-2778kB/s), io=2716KiB (2781kB), run=1001-1001msec 00:13:53.194 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:13:53.194 00:13:53.194 Disk stats (read/write): 00:13:53.194 nvme0n1: ios=585/1024, merge=0/0, ticks=403/403, in_queue=806, util=93.89% 00:13:53.194 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:53.194 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.194 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:53.194 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:53.194 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:53.195 rmmod nvme_tcp 00:13:53.195 rmmod nvme_fabrics 00:13:53.195 rmmod nvme_keyring 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1834158 ']' 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1834158 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1834158 ']' 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1834158 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.195 08:10:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1834158 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1834158' 00:13:53.195 killing process with pid 1834158 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1834158 00:13:53.195 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1834158 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:53.456 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:55.366 08:11:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:55.366 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:13:55.367 
08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:13:55.367 00:13:55.367 real 0m19.028s 00:13:55.367 user 0m45.609s 00:13:55.367 sys 0m7.328s 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.367 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:55.367 ************************************ 00:13:55.367 END TEST nvmf_nmic 00:13:55.367 ************************************ 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:55.628 ************************************ 00:13:55.628 START TEST nvmf_fio_target 00:13:55.628 ************************************ 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:55.628 * Looking for test storage... 
00:13:55.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.628 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:55.629 08:11:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:55.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.629 
--rc genhtml_branch_coverage=1 00:13:55.629 --rc genhtml_function_coverage=1 00:13:55.629 --rc genhtml_legend=1 00:13:55.629 --rc geninfo_all_blocks=1 00:13:55.629 --rc geninfo_unexecuted_blocks=1 00:13:55.629 00:13:55.629 ' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:55.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.629 --rc genhtml_branch_coverage=1 00:13:55.629 --rc genhtml_function_coverage=1 00:13:55.629 --rc genhtml_legend=1 00:13:55.629 --rc geninfo_all_blocks=1 00:13:55.629 --rc geninfo_unexecuted_blocks=1 00:13:55.629 00:13:55.629 ' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:55.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.629 --rc genhtml_branch_coverage=1 00:13:55.629 --rc genhtml_function_coverage=1 00:13:55.629 --rc genhtml_legend=1 00:13:55.629 --rc geninfo_all_blocks=1 00:13:55.629 --rc geninfo_unexecuted_blocks=1 00:13:55.629 00:13:55.629 ' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:55.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.629 --rc genhtml_branch_coverage=1 00:13:55.629 --rc genhtml_function_coverage=1 00:13:55.629 --rc genhtml_legend=1 00:13:55.629 --rc geninfo_all_blocks=1 00:13:55.629 --rc geninfo_unexecuted_blocks=1 00:13:55.629 00:13:55.629 ' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.629 
08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 
-- # : 0 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:55.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.629 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:55.630 08:11:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:13:55.630 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:03.768 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:03.768 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:14:03.768 Found net devices under 0000:31:00.0: cvl_0_0 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:03.768 Found net devices under 0000:31:00.1: cvl_0_1 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:03.768 
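The device discovery above resolves each NVMe-oF-capable PCI function to its kernel net device by globbing sysfs, as in common.sh's `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`. A minimal, mockable sketch of that lookup (the `pci_net_names` function name and the `SYSFS_ROOT` override are illustrative, added here so the behavior can be exercised outside a real sysfs):

```shell
# List the network interface names that belong to a PCI function,
# e.g. "cvl_0_0" for 0000:31:00.0 on the machine in this log.
pci_net_names() {
    local pci=$1 root=${SYSFS_ROOT:-/sys}
    local devs=("$root/bus/pci/devices/$pci/net/"*)  # glob net devices under the PCI function
    [ -e "${devs[0]}" ] || return 1                  # no match: the glob stayed literal
    printf '%s\n' "${devs[@]##*/}"                   # strip directory prefix, keeping names only
}
```

On the host above, `pci_net_names 0000:31:00.0` would print `cvl_0_0`, matching the "Found net devices under 0000:31:00.0" lines in the trace.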
08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:03.768 08:11:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:14:03.768 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:03.769 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:04.029 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:04.029 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:04.029 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:04.030 10.0.0.1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # 
[[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:04.030 10.0.0.2 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- 
# set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:04.030 08:11:08 
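The addresses assigned above (10.0.0.1 for the initiator, 10.0.0.2 for the in-namespace target) come from the integer pool `0x0a000001`, converted to dotted-quad form by `val_to_ip` via `printf '%u.%u.%u.%u'`. A self-contained sketch of that conversion (the byte-shifting here is a plausible reconstruction of how the octets are derived; the trace only shows the final `printf`):

```shell
# Convert a 32-bit integer to dotted-quad notation,
# e.g. 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
```

Each initiator/target pair consumes two consecutive values from the pool, which is why the trace shows `ip_pool += 2` after the pair is configured.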
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:04.030 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:04.293 
08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:04.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.610 ms 00:14:04.293 00:14:04.293 --- 10.0.0.1 ping statistics --- 00:14:04.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.293 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:04.293 08:11:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:04.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:14:04.293 00:14:04.293 --- 10.0.0.2 ping statistics --- 00:14:04.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.293 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:04.293 08:11:08 
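Notice that `get_ip_address` in the trace recovers a device's address by reading the `ifalias` attribute under `/sys/class/net` (which `set_ip` populated earlier with `tee`), rather than by parsing `ip addr` output. A mockable sketch of that read (the `ifalias_ip` name and the `NET_ROOT` override are illustrative additions for testing):

```shell
# Read back the IP recorded in a device's sysfs ifalias file,
# as the setup.sh helpers do for cvl_0_0 / cvl_0_1 above.
ifalias_ip() {
    local dev=$1 root=${NET_ROOT:-/sys/class/net} ip
    ip=$(cat "$root/$dev/ifalias" 2>/dev/null)
    [ -n "$ip" ] || return 1   # empty or missing alias: report failure
    echo "$ip"
}
```

Storing the address in `ifalias` gives the scripts a single, namespace-aware source of truth: the same `cat` works unchanged when prefixed with `ip netns exec nvmf_ns_spdk`, as the target-side lookups in the trace show.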
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:04.293 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:14:04.294 
08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:14:04.294 08:11:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1840759 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1840759 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1840759 ']' 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.294 08:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.294 [2024-11-20 08:11:08.973262] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:14:04.294 [2024-11-20 08:11:08.973327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.556 [2024-11-20 08:11:09.064155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.556 [2024-11-20 08:11:09.105716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.556 [2024-11-20 08:11:09.105750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.556 [2024-11-20 08:11:09.105758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.556 [2024-11-20 08:11:09.105770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.556 [2024-11-20 08:11:09.105776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:04.556 [2024-11-20 08:11:09.107645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.556 [2024-11-20 08:11:09.107762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.556 [2024-11-20 08:11:09.107919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.556 [2024-11-20 08:11:09.107919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.127 08:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:05.387 [2024-11-20 08:11:09.979193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.387 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.647 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:05.647 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.909 08:11:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:05.909 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.909 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:05.909 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.169 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:06.169 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:06.429 08:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.690 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:06.690 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.690 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:06.690 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.950 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:06.950 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:14:07.212 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:07.473 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:07.473 08:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.473 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:07.473 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.733 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.994 [2024-11-20 08:11:12.464808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.994 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:07.994 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:08.254 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:14:09.639 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:09.640 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.640 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.640 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:09.640 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:09.640 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:12.183 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:12.183 [global] 00:14:12.183 thread=1 00:14:12.183 invalidate=1 00:14:12.183 rw=write 00:14:12.183 time_based=1 00:14:12.183 runtime=1 00:14:12.183 ioengine=libaio 00:14:12.183 direct=1 00:14:12.183 bs=4096 00:14:12.183 iodepth=1 00:14:12.183 norandommap=0 00:14:12.183 numjobs=1 00:14:12.183 00:14:12.183 
verify_dump=1 00:14:12.183 verify_backlog=512 00:14:12.183 verify_state_save=0 00:14:12.183 do_verify=1 00:14:12.183 verify=crc32c-intel 00:14:12.183 [job0] 00:14:12.183 filename=/dev/nvme0n1 00:14:12.183 [job1] 00:14:12.183 filename=/dev/nvme0n2 00:14:12.183 [job2] 00:14:12.183 filename=/dev/nvme0n3 00:14:12.183 [job3] 00:14:12.183 filename=/dev/nvme0n4 00:14:12.183 Could not set queue depth (nvme0n1) 00:14:12.183 Could not set queue depth (nvme0n2) 00:14:12.183 Could not set queue depth (nvme0n3) 00:14:12.183 Could not set queue depth (nvme0n4) 00:14:12.183 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.183 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.183 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.183 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.183 fio-3.35 00:14:12.183 Starting 4 threads 00:14:13.567 00:14:13.567 job0: (groupid=0, jobs=1): err= 0: pid=1842387: Wed Nov 20 08:11:18 2024 00:14:13.567 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:13.567 slat (nsec): min=6796, max=63418, avg=27439.70, stdev=3169.89 00:14:13.567 clat (usec): min=551, max=1280, avg=986.27, stdev=100.83 00:14:13.567 lat (usec): min=578, max=1307, avg=1013.71, stdev=100.95 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 742], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:14:13.567 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1012], 00:14:13.567 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:14:13.567 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:14:13.567 | 99.99th=[ 1287] 00:14:13.567 write: IOPS=761, BW=3045KiB/s (3118kB/s)(3048KiB/1001msec); 0 zone resets 00:14:13.567 slat (nsec): min=9492, max=56157, avg=32874.79, 
stdev=8474.44 00:14:13.567 clat (usec): min=189, max=1608, avg=583.34, stdev=140.07 00:14:13.567 lat (usec): min=201, max=1621, avg=616.21, stdev=141.78 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 293], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 465], 00:14:13.567 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 611], 00:14:13.567 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 807], 00:14:13.567 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 1614], 99.95th=[ 1614], 00:14:13.567 | 99.99th=[ 1614] 00:14:13.567 bw ( KiB/s): min= 4096, max= 4096, per=33.12%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.567 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.567 lat (usec) : 250=0.24%, 500=15.38%, 750=37.60%, 1000=28.57% 00:14:13.567 lat (msec) : 2=18.21% 00:14:13.567 cpu : usr=3.30%, sys=4.60%, ctx=1275, majf=0, minf=1 00:14:13.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 issued rwts: total=512,762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.567 job1: (groupid=0, jobs=1): err= 0: pid=1842393: Wed Nov 20 08:11:18 2024 00:14:13.567 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:13.567 slat (nsec): min=7477, max=48303, avg=28260.19, stdev=2887.02 00:14:13.567 clat (usec): min=537, max=1221, avg=980.54, stdev=91.27 00:14:13.567 lat (usec): min=566, max=1249, avg=1008.80, stdev=91.07 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 914], 00:14:13.567 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:14:13.567 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:14:13.567 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 
1221], 99.95th=[ 1221], 00:14:13.567 | 99.99th=[ 1221] 00:14:13.567 write: IOPS=783, BW=3133KiB/s (3208kB/s)(3136KiB/1001msec); 0 zone resets 00:14:13.567 slat (nsec): min=9367, max=72889, avg=32531.13, stdev=10246.09 00:14:13.567 clat (usec): min=116, max=1446, avg=569.89, stdev=153.68 00:14:13.567 lat (usec): min=126, max=1483, avg=602.42, stdev=156.40 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 206], 5.00th=[ 281], 10.00th=[ 359], 20.00th=[ 445], 00:14:13.567 | 30.00th=[ 502], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:14:13.567 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 775], 00:14:13.567 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 1450], 99.95th=[ 1450], 00:14:13.567 | 99.99th=[ 1450] 00:14:13.567 bw ( KiB/s): min= 4096, max= 4096, per=33.12%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.567 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.567 lat (usec) : 250=2.01%, 500=16.13%, 750=38.12%, 1000=24.61% 00:14:13.567 lat (msec) : 2=19.14% 00:14:13.567 cpu : usr=2.50%, sys=5.50%, ctx=1298, majf=0, minf=1 00:14:13.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 issued rwts: total=512,784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.567 job2: (groupid=0, jobs=1): err= 0: pid=1842406: Wed Nov 20 08:11:18 2024 00:14:13.567 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:13.567 slat (nsec): min=8019, max=69710, avg=29696.59, stdev=5572.23 00:14:13.567 clat (usec): min=525, max=41197, avg=1021.57, stdev=1783.27 00:14:13.567 lat (usec): min=540, max=41225, avg=1051.27, stdev=1783.25 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 611], 5.00th=[ 725], 10.00th=[ 766], 20.00th=[ 840], 00:14:13.567 
| 30.00th=[ 898], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 988], 00:14:13.567 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1123], 00:14:13.567 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[41157], 99.95th=[41157], 00:14:13.567 | 99.99th=[41157] 00:14:13.567 write: IOPS=784, BW=3137KiB/s (3212kB/s)(3140KiB/1001msec); 0 zone resets 00:14:13.567 slat (nsec): min=9668, max=66127, avg=29301.72, stdev=12265.84 00:14:13.567 clat (usec): min=130, max=2157, avg=544.54, stdev=168.40 00:14:13.567 lat (usec): min=140, max=2194, avg=573.85, stdev=173.37 00:14:13.567 clat percentiles (usec): 00:14:13.567 | 1.00th=[ 231], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 388], 00:14:13.567 | 30.00th=[ 461], 40.00th=[ 502], 50.00th=[ 553], 60.00th=[ 594], 00:14:13.567 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 783], 00:14:13.567 | 99.00th=[ 889], 99.50th=[ 947], 99.90th=[ 2147], 99.95th=[ 2147], 00:14:13.567 | 99.99th=[ 2147] 00:14:13.567 bw ( KiB/s): min= 4096, max= 4096, per=33.12%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.567 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.567 lat (usec) : 250=1.16%, 500=22.59%, 750=35.62%, 1000=26.60% 00:14:13.567 lat (msec) : 2=13.88%, 4=0.08%, 50=0.08% 00:14:13.567 cpu : usr=2.10%, sys=5.60%, ctx=1299, majf=0, minf=1 00:14:13.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.567 issued rwts: total=512,785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.567 job3: (groupid=0, jobs=1): err= 0: pid=1842413: Wed Nov 20 08:11:18 2024 00:14:13.567 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:13.567 slat (nsec): min=7845, max=46497, avg=28138.62, stdev=2349.02 00:14:13.567 clat (usec): min=637, max=1250, 
avg=982.02, stdev=97.41 00:14:13.568 lat (usec): min=666, max=1278, avg=1010.16, stdev=97.57 00:14:13.568 clat percentiles (usec): 00:14:13.568 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 906], 00:14:13.568 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1012], 00:14:13.568 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:14:13.568 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:14:13.568 | 99.99th=[ 1254] 00:14:13.568 write: IOPS=763, BW=3053KiB/s (3126kB/s)(3056KiB/1001msec); 0 zone resets 00:14:13.568 slat (nsec): min=9307, max=56605, avg=32663.29, stdev=9575.80 00:14:13.568 clat (usec): min=159, max=2168, avg=585.97, stdev=159.34 00:14:13.568 lat (usec): min=195, max=2204, avg=618.63, stdev=162.42 00:14:13.568 clat percentiles (usec): 00:14:13.568 | 1.00th=[ 235], 5.00th=[ 318], 10.00th=[ 383], 20.00th=[ 461], 00:14:13.568 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 627], 00:14:13.568 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807], 00:14:13.568 | 99.00th=[ 914], 99.50th=[ 971], 99.90th=[ 2180], 99.95th=[ 2180], 00:14:13.568 | 99.99th=[ 2180] 00:14:13.568 bw ( KiB/s): min= 4096, max= 4096, per=33.12%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.568 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.568 lat (usec) : 250=0.86%, 500=14.81%, 750=38.32%, 1000=26.80% 00:14:13.568 lat (msec) : 2=19.12%, 4=0.08% 00:14:13.568 cpu : usr=2.40%, sys=5.50%, ctx=1278, majf=0, minf=1 00:14:13.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.568 issued rwts: total=512,764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.568 00:14:13.568 Run status group 0 (all jobs): 
00:14:13.568 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:14:13.568 WRITE: bw=12.1MiB/s (12.7MB/s), 3045KiB/s-3137KiB/s (3118kB/s-3212kB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:14:13.568 00:14:13.568 Disk stats (read/write): 00:14:13.568 nvme0n1: ios=519/512, merge=0/0, ticks=1422/224, in_queue=1646, util=95.99% 00:14:13.568 nvme0n2: ios=535/523, merge=0/0, ticks=1408/215, in_queue=1623, util=96.53% 00:14:13.568 nvme0n3: ios=528/512, merge=0/0, ticks=1433/243, in_queue=1676, util=96.40% 00:14:13.568 nvme0n4: ios=561/512, merge=0/0, ticks=954/216, in_queue=1170, util=96.67% 00:14:13.568 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:13.568 [global] 00:14:13.568 thread=1 00:14:13.568 invalidate=1 00:14:13.568 rw=randwrite 00:14:13.568 time_based=1 00:14:13.568 runtime=1 00:14:13.568 ioengine=libaio 00:14:13.568 direct=1 00:14:13.568 bs=4096 00:14:13.568 iodepth=1 00:14:13.568 norandommap=0 00:14:13.568 numjobs=1 00:14:13.568 00:14:13.568 verify_dump=1 00:14:13.568 verify_backlog=512 00:14:13.568 verify_state_save=0 00:14:13.568 do_verify=1 00:14:13.568 verify=crc32c-intel 00:14:13.568 [job0] 00:14:13.568 filename=/dev/nvme0n1 00:14:13.568 [job1] 00:14:13.568 filename=/dev/nvme0n2 00:14:13.568 [job2] 00:14:13.568 filename=/dev/nvme0n3 00:14:13.568 [job3] 00:14:13.568 filename=/dev/nvme0n4 00:14:13.568 Could not set queue depth (nvme0n1) 00:14:13.568 Could not set queue depth (nvme0n2) 00:14:13.568 Could not set queue depth (nvme0n3) 00:14:13.568 Could not set queue depth (nvme0n4) 00:14:13.829 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.829 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.829 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.829 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.829 fio-3.35 00:14:13.829 Starting 4 threads 00:14:15.212 00:14:15.212 job0: (groupid=0, jobs=1): err= 0: pid=1842916: Wed Nov 20 08:11:19 2024 00:14:15.212 read: IOPS=18, BW=75.8KiB/s (77.6kB/s)(76.0KiB/1003msec) 00:14:15.212 slat (nsec): min=25944, max=26515, avg=26231.05, stdev=144.94 00:14:15.212 clat (usec): min=40858, max=41035, avg=40959.72, stdev=45.35 00:14:15.212 lat (usec): min=40884, max=41061, avg=40985.95, stdev=45.30 00:14:15.212 clat percentiles (usec): 00:14:15.212 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:15.212 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:15.212 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:15.212 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:15.212 | 99.99th=[41157] 00:14:15.212 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:14:15.212 slat (nsec): min=9285, max=90827, avg=27417.30, stdev=10626.32 00:14:15.212 clat (usec): min=147, max=788, avg=402.97, stdev=98.94 00:14:15.212 lat (usec): min=157, max=820, avg=430.38, stdev=102.11 00:14:15.212 clat percentiles (usec): 00:14:15.212 | 1.00th=[ 204], 5.00th=[ 251], 10.00th=[ 273], 20.00th=[ 318], 00:14:15.212 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 396], 60.00th=[ 412], 00:14:15.212 | 70.00th=[ 445], 80.00th=[ 486], 90.00th=[ 537], 95.00th=[ 586], 00:14:15.212 | 99.00th=[ 627], 99.50th=[ 668], 99.90th=[ 791], 99.95th=[ 791], 00:14:15.212 | 99.99th=[ 791] 00:14:15.212 bw ( KiB/s): min= 4096, max= 4096, per=47.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:15.212 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:15.212 lat (usec) : 250=4.52%, 500=75.89%, 750=15.82%, 1000=0.19% 00:14:15.212 
lat (msec) : 50=3.58% 00:14:15.212 cpu : usr=0.70%, sys=1.40%, ctx=532, majf=0, minf=1 00:14:15.212 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.212 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.212 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.212 job1: (groupid=0, jobs=1): err= 0: pid=1842922: Wed Nov 20 08:11:19 2024 00:14:15.212 read: IOPS=19, BW=77.4KiB/s (79.2kB/s)(80.0KiB/1034msec) 00:14:15.212 slat (nsec): min=25936, max=26554, avg=26174.55, stdev=177.00 00:14:15.212 clat (usec): min=40864, max=42446, avg=41887.69, stdev=340.11 00:14:15.212 lat (usec): min=40890, max=42472, avg=41913.87, stdev=340.13 00:14:15.212 clat percentiles (usec): 00:14:15.212 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:14:15.212 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:15.212 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:15.212 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:15.212 | 99.99th=[42206] 00:14:15.212 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:14:15.212 slat (nsec): min=9321, max=65943, avg=25144.10, stdev=10796.54 00:14:15.212 clat (usec): min=127, max=612, avg=349.83, stdev=78.98 00:14:15.212 lat (usec): min=146, max=645, avg=374.97, stdev=82.51 00:14:15.212 clat percentiles (usec): 00:14:15.212 | 1.00th=[ 145], 5.00th=[ 219], 10.00th=[ 251], 20.00th=[ 273], 00:14:15.212 | 30.00th=[ 302], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 379], 00:14:15.212 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 449], 95.00th=[ 474], 00:14:15.213 | 99.00th=[ 519], 99.50th=[ 562], 99.90th=[ 611], 99.95th=[ 611], 00:14:15.213 | 99.99th=[ 611] 00:14:15.213 bw ( KiB/s): min= 4096, max= 4096, 
per=47.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:15.213 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:15.213 lat (usec) : 250=9.40%, 500=84.59%, 750=2.26% 00:14:15.213 lat (msec) : 50=3.76% 00:14:15.213 cpu : usr=0.58%, sys=1.36%, ctx=532, majf=0, minf=2 00:14:15.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.213 job2: (groupid=0, jobs=1): err= 0: pid=1842933: Wed Nov 20 08:11:19 2024 00:14:15.213 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:15.213 slat (nsec): min=25639, max=61188, avg=26767.38, stdev=2994.79 00:14:15.213 clat (usec): min=644, max=1281, avg=1059.52, stdev=78.29 00:14:15.213 lat (usec): min=671, max=1307, avg=1086.29, stdev=78.43 00:14:15.213 clat percentiles (usec): 00:14:15.213 | 1.00th=[ 816], 5.00th=[ 922], 10.00th=[ 963], 20.00th=[ 1012], 00:14:15.213 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1057], 60.00th=[ 1074], 00:14:15.213 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1172], 00:14:15.213 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:14:15.213 | 99.99th=[ 1287] 00:14:15.213 write: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec); 0 zone resets 00:14:15.213 slat (nsec): min=9768, max=73291, avg=29876.86, stdev=9860.50 00:14:15.213 clat (usec): min=214, max=1028, avg=611.53, stdev=122.80 00:14:15.213 lat (usec): min=224, max=1061, avg=641.40, stdev=127.07 00:14:15.213 clat percentiles (usec): 00:14:15.213 | 1.00th=[ 322], 5.00th=[ 412], 10.00th=[ 453], 20.00th=[ 510], 00:14:15.213 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:14:15.213 | 70.00th=[ 676], 80.00th=[ 717], 
90.00th=[ 758], 95.00th=[ 791], 00:14:15.213 | 99.00th=[ 906], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:14:15.213 | 99.99th=[ 1029] 00:14:15.213 bw ( KiB/s): min= 4104, max= 4104, per=47.99%, avg=4104.00, stdev= 0.00, samples=1 00:14:15.213 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:14:15.213 lat (usec) : 250=0.17%, 500=10.34%, 750=39.61%, 1000=14.13% 00:14:15.213 lat (msec) : 2=35.74% 00:14:15.213 cpu : usr=1.90%, sys=3.60%, ctx=1191, majf=0, minf=1 00:14:15.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 issued rwts: total=512,677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.213 job3: (groupid=0, jobs=1): err= 0: pid=1842940: Wed Nov 20 08:11:19 2024 00:14:15.213 read: IOPS=18, BW=73.4KiB/s (75.2kB/s)(76.0KiB/1035msec) 00:14:15.213 slat (nsec): min=25214, max=26143, avg=25524.95, stdev=193.57 00:14:15.213 clat (usec): min=735, max=42693, avg=39830.67, stdev=9468.83 00:14:15.213 lat (usec): min=761, max=42718, avg=39856.20, stdev=9468.68 00:14:15.213 clat percentiles (usec): 00:14:15.213 | 1.00th=[ 734], 5.00th=[ 734], 10.00th=[41681], 20.00th=[41681], 00:14:15.213 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:15.213 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:14:15.213 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:15.213 | 99.99th=[42730] 00:14:15.213 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:14:15.213 slat (nsec): min=9434, max=51561, avg=27983.71, stdev=9077.11 00:14:15.213 clat (usec): min=131, max=828, avg=506.23, stdev=123.77 00:14:15.213 lat (usec): min=141, max=875, avg=534.22, stdev=127.70 00:14:15.213 clat 
percentiles (usec): 00:14:15.213 | 1.00th=[ 188], 5.00th=[ 289], 10.00th=[ 343], 20.00th=[ 400], 00:14:15.213 | 30.00th=[ 441], 40.00th=[ 478], 50.00th=[ 510], 60.00th=[ 553], 00:14:15.213 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 693], 00:14:15.213 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 832], 99.95th=[ 832], 00:14:15.213 | 99.99th=[ 832] 00:14:15.213 bw ( KiB/s): min= 4096, max= 4096, per=47.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:15.213 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:15.213 lat (usec) : 250=2.07%, 500=42.37%, 750=50.28%, 1000=1.88% 00:14:15.213 lat (msec) : 50=3.39% 00:14:15.213 cpu : usr=0.97%, sys=1.06%, ctx=531, majf=0, minf=1 00:14:15.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.213 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.213 00:14:15.213 Run status group 0 (all jobs): 00:14:15.213 READ: bw=2203KiB/s (2256kB/s), 73.4KiB/s-2046KiB/s (75.2kB/s-2095kB/s), io=2280KiB (2335kB), run=1001-1035msec 00:14:15.213 WRITE: bw=8553KiB/s (8758kB/s), 1979KiB/s-2705KiB/s (2026kB/s-2770kB/s), io=8852KiB (9064kB), run=1001-1035msec 00:14:15.213 00:14:15.213 Disk stats (read/write): 00:14:15.213 nvme0n1: ios=64/512, merge=0/0, ticks=597/197, in_queue=794, util=85.17% 00:14:15.213 nvme0n2: ios=53/512, merge=0/0, ticks=688/180, in_queue=868, util=87.87% 00:14:15.213 nvme0n3: ios=514/512, merge=0/0, ticks=1003/306, in_queue=1309, util=96.83% 00:14:15.213 nvme0n4: ios=52/512, merge=0/0, ticks=598/242, in_queue=840, util=91.55% 00:14:15.213 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 
-t write -r 1 -v 00:14:15.213 [global] 00:14:15.213 thread=1 00:14:15.213 invalidate=1 00:14:15.213 rw=write 00:14:15.213 time_based=1 00:14:15.213 runtime=1 00:14:15.213 ioengine=libaio 00:14:15.213 direct=1 00:14:15.213 bs=4096 00:14:15.213 iodepth=128 00:14:15.213 norandommap=0 00:14:15.213 numjobs=1 00:14:15.213 00:14:15.213 verify_dump=1 00:14:15.213 verify_backlog=512 00:14:15.213 verify_state_save=0 00:14:15.213 do_verify=1 00:14:15.213 verify=crc32c-intel 00:14:15.213 [job0] 00:14:15.213 filename=/dev/nvme0n1 00:14:15.213 [job1] 00:14:15.213 filename=/dev/nvme0n2 00:14:15.213 [job2] 00:14:15.213 filename=/dev/nvme0n3 00:14:15.213 [job3] 00:14:15.213 filename=/dev/nvme0n4 00:14:15.213 Could not set queue depth (nvme0n1) 00:14:15.213 Could not set queue depth (nvme0n2) 00:14:15.213 Could not set queue depth (nvme0n3) 00:14:15.213 Could not set queue depth (nvme0n4) 00:14:15.473 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.473 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.473 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.474 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.474 fio-3.35 00:14:15.474 Starting 4 threads 00:14:16.861 00:14:16.861 job0: (groupid=0, jobs=1): err= 0: pid=1843433: Wed Nov 20 08:11:21 2024 00:14:16.861 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.7MiB/1049msec) 00:14:16.861 slat (nsec): min=978, max=21023k, avg=118636.43, stdev=946750.13 00:14:16.861 clat (usec): min=2431, max=71624, avg=17046.91, stdev=11564.70 00:14:16.861 lat (usec): min=2439, max=71625, avg=17165.55, stdev=11642.79 00:14:16.861 clat percentiles (usec): 00:14:16.861 | 1.00th=[ 3851], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 8356], 00:14:16.861 | 30.00th=[ 9634], 40.00th=[11731], 50.00th=[14091], 
60.00th=[15533], 00:14:16.861 | 70.00th=[18220], 80.00th=[25035], 90.00th=[31851], 95.00th=[37487], 00:14:16.861 | 99.00th=[62653], 99.50th=[65274], 99.90th=[70779], 99.95th=[70779], 00:14:16.861 | 99.99th=[71828] 00:14:16.861 write: IOPS=4392, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1049msec); 0 zone resets 00:14:16.861 slat (nsec): min=1680, max=14583k, avg=90243.92, stdev=651164.75 00:14:16.861 clat (usec): min=383, max=45623, avg=13033.53, stdev=9075.73 00:14:16.861 lat (usec): min=392, max=46688, avg=13123.77, stdev=9156.45 00:14:16.861 clat percentiles (usec): 00:14:16.861 | 1.00th=[ 971], 5.00th=[ 2671], 10.00th=[ 4293], 20.00th=[ 6259], 00:14:16.861 | 30.00th=[ 7373], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10814], 00:14:16.861 | 70.00th=[13566], 80.00th=[22414], 90.00th=[27657], 95.00th=[30802], 00:14:16.861 | 99.00th=[38011], 99.50th=[41157], 99.90th=[45351], 99.95th=[45876], 00:14:16.861 | 99.99th=[45876] 00:14:16.861 bw ( KiB/s): min=12288, max=24576, per=21.84%, avg=18432.00, stdev=8688.93, samples=2 00:14:16.861 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:14:16.861 lat (usec) : 500=0.17%, 750=0.05%, 1000=0.34% 00:14:16.861 lat (msec) : 2=0.98%, 4=3.85%, 10=38.31%, 20=30.65%, 50=24.23% 00:14:16.861 lat (msec) : 100=1.43% 00:14:16.861 cpu : usr=2.96%, sys=5.44%, ctx=404, majf=0, minf=1 00:14:16.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:16.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.861 issued rwts: total=4275,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.861 job1: (groupid=0, jobs=1): err= 0: pid=1843442: Wed Nov 20 08:11:21 2024 00:14:16.861 read: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec) 00:14:16.861 slat (nsec): min=957, max=11022k, avg=51912.14, stdev=386528.00 00:14:16.861 clat 
(usec): min=1847, max=38272, avg=6777.42, stdev=2552.93 00:14:16.861 lat (usec): min=2069, max=38280, avg=6829.33, stdev=2581.45 00:14:16.861 clat percentiles (usec): 00:14:16.861 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5276], 00:14:16.861 | 30.00th=[ 5669], 40.00th=[ 5932], 50.00th=[ 6325], 60.00th=[ 6718], 00:14:16.861 | 70.00th=[ 7177], 80.00th=[ 7767], 90.00th=[ 9110], 95.00th=[10159], 00:14:16.861 | 99.00th=[13698], 99.50th=[27395], 99.90th=[35914], 99.95th=[35914], 00:14:16.861 | 99.99th=[38011] 00:14:16.861 write: IOPS=9949, BW=38.9MiB/s (40.8MB/s)(39.0MiB/1004msec); 0 zone resets 00:14:16.861 slat (nsec): min=1589, max=20006k, avg=44734.99, stdev=365863.50 00:14:16.861 clat (usec): min=1309, max=37894, avg=6150.98, stdev=3902.09 00:14:16.861 lat (usec): min=1320, max=37906, avg=6195.71, stdev=3922.53 00:14:16.861 clat percentiles (usec): 00:14:16.861 | 1.00th=[ 2057], 5.00th=[ 3130], 10.00th=[ 3621], 20.00th=[ 4359], 00:14:16.861 | 30.00th=[ 5014], 40.00th=[ 5473], 50.00th=[ 5669], 60.00th=[ 5866], 00:14:16.861 | 70.00th=[ 6063], 80.00th=[ 6652], 90.00th=[ 7767], 95.00th=[ 9241], 00:14:16.861 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:14:16.861 | 99.99th=[38011] 00:14:16.862 bw ( KiB/s): min=34928, max=43968, per=46.75%, avg=39448.00, stdev=6392.25, samples=2 00:14:16.862 iops : min= 8732, max=10992, avg=9862.00, stdev=1598.06, samples=2 00:14:16.862 lat (msec) : 2=0.48%, 4=8.95%, 10=85.47%, 20=3.80%, 50=1.30% 00:14:16.862 cpu : usr=6.38%, sys=8.08%, ctx=737, majf=0, minf=2 00:14:16.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:16.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.862 issued rwts: total=9728,9989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.862 job2: (groupid=0, 
jobs=1): err= 0: pid=1843448: Wed Nov 20 08:11:21 2024 00:14:16.862 read: IOPS=2781, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1008msec) 00:14:16.862 slat (nsec): min=1055, max=10342k, avg=114137.37, stdev=719237.11 00:14:16.862 clat (usec): min=4089, max=42105, avg=13439.33, stdev=6244.64 00:14:16.862 lat (usec): min=4100, max=42114, avg=13553.47, stdev=6303.14 00:14:16.862 clat percentiles (usec): 00:14:16.862 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9634], 00:14:16.862 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:14:16.862 | 70.00th=[13042], 80.00th=[16188], 90.00th=[20841], 95.00th=[29230], 00:14:16.862 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:14:16.862 | 99.99th=[42206] 00:14:16.862 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:14:16.862 slat (nsec): min=1755, max=52105k, avg=215514.77, stdev=2090958.89 00:14:16.862 clat (msec): min=3, max=240, avg=23.29, stdev=22.80 00:14:16.862 lat (msec): min=3, max=240, avg=23.50, stdev=23.16 00:14:16.862 clat percentiles (msec): 00:14:16.862 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:14:16.862 | 30.00th=[ 9], 40.00th=[ 12], 50.00th=[ 18], 60.00th=[ 23], 00:14:16.862 | 70.00th=[ 29], 80.00th=[ 35], 90.00th=[ 49], 95.00th=[ 52], 00:14:16.862 | 99.00th=[ 146], 99.50th=[ 186], 99.90th=[ 228], 99.95th=[ 241], 00:14:16.862 | 99.99th=[ 241] 00:14:16.862 bw ( KiB/s): min= 8192, max=16384, per=14.56%, avg=12288.00, stdev=5792.62, samples=2 00:14:16.862 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:14:16.862 lat (msec) : 4=0.43%, 10=31.54%, 20=38.19%, 50=27.04%, 100=2.26% 00:14:16.862 lat (msec) : 250=0.54% 00:14:16.862 cpu : usr=1.89%, sys=4.07%, ctx=273, majf=0, minf=2 00:14:16.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:16.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.862 issued rwts: total=2804,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.862 job3: (groupid=0, jobs=1): err= 0: pid=1843455: Wed Nov 20 08:11:21 2024 00:14:16.862 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:14:16.862 slat (nsec): min=910, max=19719k, avg=122617.28, stdev=823802.79 00:14:16.862 clat (usec): min=5326, max=49197, avg=15622.23, stdev=7071.34 00:14:16.862 lat (usec): min=5333, max=49221, avg=15744.85, stdev=7144.14 00:14:16.862 clat percentiles (usec): 00:14:16.862 | 1.00th=[ 6194], 5.00th=[ 7832], 10.00th=[ 9241], 20.00th=[10290], 00:14:16.862 | 30.00th=[11469], 40.00th=[12649], 50.00th=[14353], 60.00th=[15139], 00:14:16.862 | 70.00th=[16712], 80.00th=[18482], 90.00th=[25822], 95.00th=[29492], 00:14:16.862 | 99.00th=[40109], 99.50th=[42730], 99.90th=[43254], 99.95th=[46924], 00:14:16.862 | 99.99th=[49021] 00:14:16.862 write: IOPS=4425, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1008msec); 0 zone resets 00:14:16.862 slat (nsec): min=1561, max=14587k, avg=105003.01, stdev=765139.38 00:14:16.862 clat (usec): min=756, max=56891, avg=14318.12, stdev=8536.90 00:14:16.862 lat (usec): min=1218, max=56898, avg=14423.13, stdev=8603.71 00:14:16.862 clat percentiles (usec): 00:14:16.862 | 1.00th=[ 3294], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 8717], 00:14:16.862 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11469], 60.00th=[12649], 00:14:16.862 | 70.00th=[13698], 80.00th=[19792], 90.00th=[25035], 95.00th=[32637], 00:14:16.862 | 99.00th=[47449], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:14:16.862 | 99.99th=[56886] 00:14:16.862 bw ( KiB/s): min=14184, max=20480, per=20.54%, avg=17332.00, stdev=4451.94, samples=2 00:14:16.862 iops : min= 3546, max= 5120, avg=4333.00, stdev=1112.99, samples=2 00:14:16.862 lat (usec) : 1000=0.01% 00:14:16.862 lat (msec) : 2=0.15%, 4=0.68%, 10=25.70%, 20=55.01%, 50=18.18% 00:14:16.862 lat (msec) 
: 100=0.27% 00:14:16.862 cpu : usr=2.98%, sys=4.17%, ctx=359, majf=0, minf=1 00:14:16.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:16.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.862 issued rwts: total=4096,4461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.862 00:14:16.862 Run status group 0 (all jobs): 00:14:16.862 READ: bw=77.8MiB/s (81.6MB/s), 10.9MiB/s-37.8MiB/s (11.4MB/s-39.7MB/s), io=81.7MiB (85.6MB), run=1004-1049msec 00:14:16.862 WRITE: bw=82.4MiB/s (86.4MB/s), 11.9MiB/s-38.9MiB/s (12.5MB/s-40.8MB/s), io=86.4MiB (90.6MB), run=1004-1049msec 00:14:16.862 00:14:16.862 Disk stats (read/write): 00:14:16.862 nvme0n1: ios=3994/4096, merge=0/0, ticks=39171/32971, in_queue=72142, util=97.19% 00:14:16.862 nvme0n2: ios=7716/8192, merge=0/0, ticks=50020/45331, in_queue=95351, util=86.75% 00:14:16.862 nvme0n3: ios=2085/2551, merge=0/0, ticks=26169/41010, in_queue=67179, util=97.25% 00:14:16.862 nvme0n4: ios=3529/3584, merge=0/0, ticks=27631/25570, in_queue=53201, util=95.07% 00:14:16.862 08:11:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:16.862 [global] 00:14:16.862 thread=1 00:14:16.862 invalidate=1 00:14:16.862 rw=randwrite 00:14:16.862 time_based=1 00:14:16.862 runtime=1 00:14:16.862 ioengine=libaio 00:14:16.862 direct=1 00:14:16.862 bs=4096 00:14:16.862 iodepth=128 00:14:16.862 norandommap=0 00:14:16.862 numjobs=1 00:14:16.862 00:14:16.862 verify_dump=1 00:14:16.862 verify_backlog=512 00:14:16.862 verify_state_save=0 00:14:16.862 do_verify=1 00:14:16.862 verify=crc32c-intel 00:14:16.862 [job0] 00:14:16.862 filename=/dev/nvme0n1 00:14:16.862 [job1] 00:14:16.862 filename=/dev/nvme0n2 
00:14:16.862 [job2] 00:14:16.862 filename=/dev/nvme0n3 00:14:16.862 [job3] 00:14:16.862 filename=/dev/nvme0n4 00:14:16.862 Could not set queue depth (nvme0n1) 00:14:16.862 Could not set queue depth (nvme0n2) 00:14:16.862 Could not set queue depth (nvme0n3) 00:14:16.862 Could not set queue depth (nvme0n4) 00:14:17.430 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.430 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.430 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.430 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.430 fio-3.35 00:14:17.430 Starting 4 threads 00:14:18.836 00:14:18.836 job0: (groupid=0, jobs=1): err= 0: pid=1843967: Wed Nov 20 08:11:23 2024 00:14:18.836 read: IOPS=7214, BW=28.2MiB/s (29.5MB/s)(29.5MiB/1046msec) 00:14:18.836 slat (nsec): min=898, max=13916k, avg=61992.44, stdev=497652.79 00:14:18.836 clat (usec): min=3393, max=47862, avg=9160.89, stdev=6011.41 00:14:18.836 lat (usec): min=3399, max=53405, avg=9222.89, stdev=6036.96 00:14:18.836 clat percentiles (usec): 00:14:18.836 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6128], 00:14:18.836 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 7963], 00:14:18.836 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[14353], 95.00th=[18220], 00:14:18.836 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:14:18.836 | 99.99th=[47973] 00:14:18.836 write: IOPS=7342, BW=28.7MiB/s (30.1MB/s)(30.0MiB/1046msec); 0 zone resets 00:14:18.836 slat (nsec): min=1494, max=13903k, avg=60308.13, stdev=393156.06 00:14:18.836 clat (usec): min=471, max=24821, avg=8279.64, stdev=3944.32 00:14:18.836 lat (usec): min=501, max=24830, avg=8339.95, stdev=3979.45 00:14:18.836 clat percentiles (usec): 
00:14:18.836 | 1.00th=[ 1647], 5.00th=[ 3818], 10.00th=[ 5342], 20.00th=[ 5800], 00:14:18.836 | 30.00th=[ 6063], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7111], 00:14:18.836 | 70.00th=[ 7767], 80.00th=[11863], 90.00th=[14484], 95.00th=[15533], 00:14:18.836 | 99.00th=[21365], 99.50th=[21365], 99.90th=[24773], 99.95th=[24773], 00:14:18.836 | 99.99th=[24773] 00:14:18.836 bw ( KiB/s): min=28672, max=32768, per=34.65%, avg=30720.00, stdev=2896.31, samples=2 00:14:18.836 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:14:18.836 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.13% 00:14:18.836 lat (msec) : 2=0.71%, 4=1.98%, 10=73.96%, 20=21.10%, 50=2.04% 00:14:18.836 cpu : usr=5.45%, sys=7.08%, ctx=783, majf=0, minf=1 00:14:18.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:18.836 issued rwts: total=7546,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:18.836 job1: (groupid=0, jobs=1): err= 0: pid=1843975: Wed Nov 20 08:11:23 2024 00:14:18.836 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:14:18.836 slat (nsec): min=893, max=10916k, avg=75817.39, stdev=510034.24 00:14:18.836 clat (usec): min=1252, max=36150, avg=9878.42, stdev=5123.56 00:14:18.836 lat (usec): min=1261, max=36175, avg=9954.23, stdev=5163.34 00:14:18.836 clat percentiles (usec): 00:14:18.836 | 1.00th=[ 1450], 5.00th=[ 5473], 10.00th=[ 6915], 20.00th=[ 7373], 00:14:18.836 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8455], 00:14:18.836 | 70.00th=[10028], 80.00th=[11731], 90.00th=[16909], 95.00th=[21890], 00:14:18.836 | 99.00th=[30016], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:14:18.836 | 99.99th=[35914] 00:14:18.836 write: IOPS=7008, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1005msec); 
0 zone resets 00:14:18.836 slat (nsec): min=1479, max=14395k, avg=63517.04, stdev=459872.18 00:14:18.836 clat (usec): min=595, max=48783, avg=8753.68, stdev=4179.02 00:14:18.836 lat (usec): min=603, max=48792, avg=8817.20, stdev=4207.04 00:14:18.836 clat percentiles (usec): 00:14:18.836 | 1.00th=[ 1991], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6718], 00:14:18.836 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:14:18.836 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[11731], 95.00th=[15926], 00:14:18.836 | 99.00th=[28967], 99.50th=[32637], 99.90th=[35390], 99.95th=[40633], 00:14:18.836 | 99.99th=[49021] 00:14:18.836 bw ( KiB/s): min=22560, max=32768, per=31.20%, avg=27664.00, stdev=7218.15, samples=2 00:14:18.836 iops : min= 5640, max= 8192, avg=6916.00, stdev=1804.54, samples=2 00:14:18.836 lat (usec) : 750=0.04%, 1000=0.01% 00:14:18.836 lat (msec) : 2=1.29%, 4=2.35%, 10=74.57%, 20=17.40%, 50=4.34% 00:14:18.836 cpu : usr=3.49%, sys=5.08%, ctx=595, majf=0, minf=1 00:14:18.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:18.836 issued rwts: total=6656,7044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:18.836 job2: (groupid=0, jobs=1): err= 0: pid=1843982: Wed Nov 20 08:11:23 2024 00:14:18.836 read: IOPS=3527, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1006msec) 00:14:18.836 slat (nsec): min=919, max=26431k, avg=134239.67, stdev=933474.00 00:14:18.836 clat (msec): min=2, max=122, avg=17.99, stdev=16.71 00:14:18.836 lat (msec): min=10, max=125, avg=18.12, stdev=16.78 00:14:18.836 clat percentiles (msec): 00:14:18.836 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:14:18.836 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:14:18.836 | 70.00th=[ 14], 
80.00th=[ 15], 90.00th=[ 31], 95.00th=[ 40], 00:14:18.836 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:14:18.836 | 99.99th=[ 123] 00:14:18.836 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:14:18.836 slat (nsec): min=1534, max=24519k, avg=142351.56, stdev=891121.28 00:14:18.836 clat (msec): min=7, max=101, avg=17.25, stdev=13.97 00:14:18.836 lat (msec): min=8, max=101, avg=17.39, stdev=14.08 00:14:18.836 clat percentiles (msec): 00:14:18.836 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:14:18.836 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:14:18.836 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 30], 95.00th=[ 47], 00:14:18.836 | 99.00th=[ 78], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:14:18.836 | 99.99th=[ 103] 00:14:18.836 bw ( KiB/s): min= 9488, max=19184, per=16.17%, avg=14336.00, stdev=6856.11, samples=2 00:14:18.836 iops : min= 2372, max= 4796, avg=3584.00, stdev=1714.03, samples=2 00:14:18.836 lat (msec) : 4=0.01%, 10=2.29%, 20=80.15%, 50=12.97%, 100=3.59% 00:14:18.836 lat (msec) : 250=1.00% 00:14:18.836 cpu : usr=1.99%, sys=2.59%, ctx=536, majf=0, minf=2 00:14:18.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:14:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:18.836 issued rwts: total=3549,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:18.836 job3: (groupid=0, jobs=1): err= 0: pid=1843989: Wed Nov 20 08:11:23 2024 00:14:18.836 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:14:18.836 slat (nsec): min=938, max=21501k, avg=102889.74, stdev=861614.73 00:14:18.836 clat (usec): min=2055, max=62590, avg=12959.70, stdev=7768.30 00:14:18.836 lat (usec): min=2064, max=62594, avg=13062.59, stdev=7845.53 00:14:18.836 clat percentiles 
(usec): 00:14:18.836 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8225], 00:14:18.836 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[10290], 60.00th=[12256], 00:14:18.837 | 70.00th=[13698], 80.00th=[17433], 90.00th=[21103], 95.00th=[25822], 00:14:18.837 | 99.00th=[54789], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:14:18.837 | 99.99th=[62653] 00:14:18.837 write: IOPS=4854, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1005msec); 0 zone resets 00:14:18.837 slat (nsec): min=1538, max=11832k, avg=92536.46, stdev=561649.47 00:14:18.837 clat (usec): min=614, max=65832, avg=13889.81, stdev=11576.86 00:14:18.837 lat (usec): min=644, max=65840, avg=13982.35, stdev=11646.34 00:14:18.837 clat percentiles (usec): 00:14:18.837 | 1.00th=[ 1450], 5.00th=[ 2704], 10.00th=[ 4228], 20.00th=[ 7439], 00:14:18.837 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 9896], 60.00th=[12649], 00:14:18.837 | 70.00th=[14091], 80.00th=[19006], 90.00th=[28967], 95.00th=[39584], 00:14:18.837 | 99.00th=[59507], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:14:18.837 | 99.99th=[65799] 00:14:18.837 bw ( KiB/s): min=17552, max=20464, per=21.44%, avg=19008.00, stdev=2059.09, samples=2 00:14:18.837 iops : min= 4388, max= 5116, avg=4752.00, stdev=514.77, samples=2 00:14:18.837 lat (usec) : 750=0.03%, 1000=0.06% 00:14:18.837 lat (msec) : 2=1.62%, 4=3.03%, 10=45.29%, 20=34.79%, 50=13.04% 00:14:18.837 lat (msec) : 100=2.13% 00:14:18.837 cpu : usr=3.69%, sys=5.38%, ctx=480, majf=0, minf=1 00:14:18.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:18.837 issued rwts: total=4608,4879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.837 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:18.837 00:14:18.837 Run status group 0 (all jobs): 00:14:18.837 READ: bw=83.5MiB/s (87.6MB/s), 
13.8MiB/s-28.2MiB/s (14.5MB/s-29.5MB/s), io=87.3MiB (91.6MB), run=1005-1046msec 00:14:18.837 WRITE: bw=86.6MiB/s (90.8MB/s), 13.9MiB/s-28.7MiB/s (14.6MB/s-30.1MB/s), io=90.6MiB (95.0MB), run=1005-1046msec 00:14:18.837 00:14:18.837 Disk stats (read/write): 00:14:18.837 nvme0n1: ios=6574/6656, merge=0/0, ticks=41150/35986, in_queue=77136, util=91.98% 00:14:18.837 nvme0n2: ios=5852/6171, merge=0/0, ticks=26511/25253, in_queue=51764, util=95.20% 00:14:18.837 nvme0n3: ios=2560/3047, merge=0/0, ticks=11396/15096, in_queue=26492, util=88.27% 00:14:18.837 nvme0n4: ios=3296/3584, merge=0/0, ticks=46279/56254, in_queue=102533, util=96.25% 00:14:18.837 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:18.837 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1844283 00:14:18.837 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:18.837 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:18.837 [global] 00:14:18.837 thread=1 00:14:18.837 invalidate=1 00:14:18.837 rw=read 00:14:18.837 time_based=1 00:14:18.837 runtime=10 00:14:18.837 ioengine=libaio 00:14:18.837 direct=1 00:14:18.837 bs=4096 00:14:18.837 iodepth=1 00:14:18.837 norandommap=1 00:14:18.837 numjobs=1 00:14:18.837 00:14:18.837 [job0] 00:14:18.837 filename=/dev/nvme0n1 00:14:18.837 [job1] 00:14:18.837 filename=/dev/nvme0n2 00:14:18.837 [job2] 00:14:18.837 filename=/dev/nvme0n3 00:14:18.837 [job3] 00:14:18.837 filename=/dev/nvme0n4 00:14:18.837 Could not set queue depth (nvme0n1) 00:14:18.837 Could not set queue depth (nvme0n2) 00:14:18.837 Could not set queue depth (nvme0n3) 00:14:18.837 Could not set queue depth (nvme0n4) 00:14:19.096 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.096 job1: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.096 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.096 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.096 fio-3.35 00:14:19.096 Starting 4 threads 00:14:21.644 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:21.644 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:21.905 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8232960, buflen=4096 00:14:21.905 fio: pid=1844547, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:21.905 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11481088, buflen=4096 00:14:21.905 fio: pid=1844534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:21.905 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.905 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:22.166 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2072576, buflen=4096 00:14:22.166 fio: pid=1844495, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:22.166 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.166 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:22.428 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.428 08:11:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:22.428 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=3248128, buflen=4096 00:14:22.428 fio: pid=1844502, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:22.428 00:14:22.428 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1844495: Wed Nov 20 08:11:26 2024 00:14:22.428 read: IOPS=174, BW=695KiB/s (711kB/s)(2024KiB/2913msec) 00:14:22.428 slat (usec): min=6, max=27946, avg=108.69, stdev=1370.38 00:14:22.428 clat (usec): min=493, max=41873, avg=5597.27, stdev=12756.41 00:14:22.428 lat (usec): min=508, max=41899, avg=5706.12, stdev=12800.89 00:14:22.428 clat percentiles (usec): 00:14:22.428 | 1.00th=[ 766], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 963], 00:14:22.428 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:14:22.428 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[41157], 95.00th=[41157], 00:14:22.428 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:22.428 | 99.99th=[41681] 00:14:22.428 bw ( KiB/s): min= 96, max= 1680, per=6.65%, avg=523.20, stdev=689.43, samples=5 00:14:22.428 iops : min= 24, max= 420, avg=130.80, stdev=172.36, samples=5 00:14:22.428 lat (usec) : 500=0.20%, 750=0.59%, 1000=34.12% 00:14:22.428 lat (msec) : 2=53.45%, 50=11.44% 00:14:22.428 cpu : usr=0.31%, sys=0.69%, ctx=509, majf=0, minf=1 00:14:22.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:14:22.428 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 issued rwts: total=507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.428 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1844502: Wed Nov 20 08:11:26 2024 00:14:22.428 read: IOPS=255, BW=1021KiB/s (1045kB/s)(3172KiB/3108msec) 00:14:22.428 slat (usec): min=7, max=6803, avg=39.67, stdev=273.39 00:14:22.428 clat (usec): min=638, max=42731, avg=3872.44, stdev=10446.93 00:14:22.428 lat (usec): min=664, max=45978, avg=3903.59, stdev=10466.46 00:14:22.428 clat percentiles (usec): 00:14:22.428 | 1.00th=[ 734], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 930], 00:14:22.428 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1037], 00:14:22.428 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[41681], 00:14:22.428 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:14:22.428 | 99.99th=[42730] 00:14:22.428 bw ( KiB/s): min= 96, max= 3888, per=13.40%, avg=1054.00, stdev=1592.67, samples=6 00:14:22.428 iops : min= 24, max= 972, avg=263.50, stdev=398.17, samples=6 00:14:22.428 lat (usec) : 750=1.13%, 1000=39.80% 00:14:22.428 lat (msec) : 2=51.89%, 50=7.05% 00:14:22.428 cpu : usr=0.19%, sys=1.06%, ctx=797, majf=0, minf=2 00:14:22.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 issued rwts: total=794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.428 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1844534: Wed Nov 20 08:11:26 2024 00:14:22.428 read: IOPS=1032, BW=4128KiB/s 
(4227kB/s)(10.9MiB/2716msec) 00:14:22.428 slat (usec): min=6, max=12813, avg=35.48, stdev=320.27 00:14:22.428 clat (usec): min=318, max=1842, avg=919.14, stdev=153.87 00:14:22.428 lat (usec): min=338, max=13630, avg=954.62, stdev=352.82 00:14:22.428 clat percentiles (usec): 00:14:22.428 | 1.00th=[ 416], 5.00th=[ 578], 10.00th=[ 685], 20.00th=[ 824], 00:14:22.428 | 30.00th=[ 898], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 988], 00:14:22.428 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:14:22.428 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1221], 00:14:22.428 | 99.99th=[ 1844] 00:14:22.428 bw ( KiB/s): min= 3904, max= 4624, per=51.92%, avg=4084.80, stdev=304.29, samples=5 00:14:22.428 iops : min= 976, max= 1156, avg=1021.20, stdev=76.07, samples=5 00:14:22.428 lat (usec) : 500=2.21%, 750=11.88%, 1000=50.53% 00:14:22.428 lat (msec) : 2=35.34% 00:14:22.428 cpu : usr=1.84%, sys=4.13%, ctx=2806, majf=0, minf=2 00:14:22.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.428 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1844547: Wed Nov 20 08:11:26 2024 00:14:22.428 read: IOPS=785, BW=3141KiB/s (3216kB/s)(8040KiB/2560msec) 00:14:22.428 slat (nsec): min=6731, max=64122, avg=28570.13, stdev=3662.31 00:14:22.428 clat (usec): min=563, max=41761, avg=1227.57, stdev=2819.83 00:14:22.428 lat (usec): min=593, max=41790, avg=1256.14, stdev=2819.68 00:14:22.428 clat percentiles (usec): 00:14:22.428 | 1.00th=[ 758], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 971], 00:14:22.428 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1037], 
60.00th=[ 1057], 00:14:22.428 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:14:22.428 | 99.00th=[ 1237], 99.50th=[ 3621], 99.90th=[41157], 99.95th=[41681], 00:14:22.428 | 99.99th=[41681] 00:14:22.428 bw ( KiB/s): min= 2192, max= 3816, per=40.86%, avg=3214.40, stdev=774.34, samples=5 00:14:22.428 iops : min= 548, max= 954, avg=803.60, stdev=193.58, samples=5 00:14:22.428 lat (usec) : 750=0.80%, 1000=28.69% 00:14:22.428 lat (msec) : 2=69.92%, 4=0.05%, 50=0.50% 00:14:22.428 cpu : usr=1.37%, sys=3.36%, ctx=2011, majf=0, minf=2 00:14:22.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.428 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.428 00:14:22.428 Run status group 0 (all jobs): 00:14:22.428 READ: bw=7866KiB/s (8055kB/s), 695KiB/s-4128KiB/s (711kB/s-4227kB/s), io=23.9MiB (25.0MB), run=2560-3108msec 00:14:22.428 00:14:22.428 Disk stats (read/write): 00:14:22.428 nvme0n1: ios=503/0, merge=0/0, ticks=2675/0, in_queue=2675, util=91.35% 00:14:22.428 nvme0n2: ios=791/0, merge=0/0, ticks=2987/0, in_queue=2987, util=94.20% 00:14:22.428 nvme0n3: ios=2586/0, merge=0/0, ticks=2255/0, in_queue=2255, util=95.54% 00:14:22.428 nvme0n4: ios=2009/0, merge=0/0, ticks=2237/0, in_queue=2237, util=96.38% 00:14:22.428 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.428 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:22.690 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.690 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:22.951 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.951 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:22.951 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.951 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1844283 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:23.212 nvmf hotplug test: fio failed as expected 00:14:23.212 08:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # 
modprobe -v -r nvme-tcp 00:14:23.472 rmmod nvme_tcp 00:14:23.472 rmmod nvme_fabrics 00:14:23.472 rmmod nvme_keyring 00:14:23.472 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 1840759 ']' 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1840759 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1840759 ']' 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1840759 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1840759 00:14:23.732 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1840759' 00:14:23.733 killing process with pid 1840759 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1840759 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1840759 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:23.733 08:11:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 
00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:14:26.286 00:14:26.286 real 0m30.334s 00:14:26.286 user 2m41.499s 00:14:26.286 sys 0m10.389s 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.286 
************************************ 00:14:26.286 END TEST nvmf_fio_target 00:14:26.286 ************************************ 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:26.286 ************************************ 00:14:26.286 START TEST nvmf_bdevio 00:14:26.286 ************************************ 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:26.286 * Looking for test storage... 00:14:26.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.286 08:11:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.286 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:26.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.287 --rc genhtml_branch_coverage=1 00:14:26.287 --rc genhtml_function_coverage=1 00:14:26.287 --rc genhtml_legend=1 00:14:26.287 --rc geninfo_all_blocks=1 00:14:26.287 --rc geninfo_unexecuted_blocks=1 00:14:26.287 00:14:26.287 ' 00:14:26.287 08:11:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:26.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.287 --rc genhtml_branch_coverage=1 00:14:26.287 --rc genhtml_function_coverage=1 00:14:26.287 --rc genhtml_legend=1 00:14:26.287 --rc geninfo_all_blocks=1 00:14:26.287 --rc geninfo_unexecuted_blocks=1 00:14:26.287 00:14:26.287 ' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:26.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.287 --rc genhtml_branch_coverage=1 00:14:26.287 --rc genhtml_function_coverage=1 00:14:26.287 --rc genhtml_legend=1 00:14:26.287 --rc geninfo_all_blocks=1 00:14:26.287 --rc geninfo_unexecuted_blocks=1 00:14:26.287 00:14:26.287 ' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:26.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.287 --rc genhtml_branch_coverage=1 00:14:26.287 --rc genhtml_function_coverage=1 00:14:26.287 --rc genhtml_legend=1 00:14:26.287 --rc geninfo_all_blocks=1 00:14:26.287 --rc geninfo_unexecuted_blocks=1 00:14:26.287 00:14:26.287 ' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
paths/export.sh@5 -- # export PATH 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:26.287 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:26.287 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:26.288 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:26.288 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:26.288 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:14:26.288 
08:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.429 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:34.430 08:11:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:34.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:34.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:34.430 
08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:34.430 Found net devices under 0000:31:00.0: cvl_0_0 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:34.430 Found net devices under 0000:31:00.1: 
cvl_0_1 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
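The "Found net devices under ..." lines earlier in the trace come from globbing `/sys/bus/pci/devices/<bdf>/net/` and stripping the path prefix (nvmf/common.sh@227 and @243). The same two expansions are shown here against a temporary fake sysfs tree so the snippet runs without the hardware; the BDF and the cvl_0_0 name are taken from the log:

```shell
#!/usr/bin/env bash
# Simulated netdev lookup: one directory entry per netdev bound to the PCI
# function, then a suffix expansion that keeps only the interface name.
sysfs=$(mktemp -d)
pci="0000:31:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)       # glob: full path per bound netdev
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip dirs, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```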
nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:34.430 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:34.431 08:11:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:34.431 10.0.0.1 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:34.431 10.0.0.2 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 
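The two `val_to_ip` calls traced above turn the ip_pool counter (0x0a000001 = 167772161) into the dotted-quad addresses assigned to the pair. The script passes the four octets to printf; the bit-shift decomposition below is my own reconstruction of how they are derived:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper at nvmf/setup.sh@11-13: split a 32-bit
# value into four octets and print them dotted.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
}
val_to_ip 167772161   # 10.0.0.1, assigned to cvl_0_0
val_to_ip 167772162   # 10.0.0.2, assigned to cvl_0_1 inside nvmf_ns_spdk
```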
00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:34.431 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 
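Condensed, the interface-pair plumbing traced from create_target_ns through the iptables rule is the sequence below. It is shown as a dry run — `run()` only echoes — because the real commands need root and the physical cvl_0_* ports; every command and argument is lifted from the trace:

```shell
#!/usr/bin/env bash
# Dry-run condensation of nvmf/setup.sh@136-73 as executed above: create the
# target namespace, move one port into it, address both ends, bring them up,
# and open the NVMe/TCP port on the initiator side.
run() { echo "+ $*"; }

run ip netns add nvmf_ns_spdk
run ip netns exec nvmf_ns_spdk ip link set lo up
run ip link set cvl_0_1 netns nvmf_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_0
run ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```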
00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:34.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.690 ms 00:14:34.431 00:14:34.431 --- 10.0.0.1 ping statistics --- 00:14:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.431 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:34.431 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:34.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:14:34.432 00:14:34.432 --- 10.0.0.2 ping statistics --- 00:14:34.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.432 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:34.432 08:11:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:34.432 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:34.693 08:11:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # 
[[ -n target1 ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1850219 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 1850219 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1850219 ']' 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.693 08:11:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:34.693 [2024-11-20 08:11:39.314141] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:14:34.693 [2024-11-20 08:11:39.314208] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.954 [2024-11-20 08:11:39.425215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.954 [2024-11-20 08:11:39.476327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.954 [2024-11-20 08:11:39.476378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
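The `waitforlisten 1850219` step above blocks until the freshly launched nvmf_tgt is serving its RPC socket (/var/tmp/spdk.sock). A hedged sketch of that pattern — the helper name, retry budget, and polling interval here are assumptions, not the actual autotest_common.sh implementation, and a temp file stands in for the socket so the loop is runnable without SPDK:

```shell
#!/usr/bin/env bash
# Bounded poll for a path to appear, mimicking the wait-for-RPC-socket step.
waitfor_path() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)
( sleep 0.3 && touch "$sock" ) &  # stand-in for the target creating its socket
waitfor_path "$sock" 50 && echo "socket ready"
wait
rm -f "$sock"
```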
00:14:34.954 [2024-11-20 08:11:39.476386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.954 [2024-11-20 08:11:39.476394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.954 [2024-11-20 08:11:39.476400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.954 [2024-11-20 08:11:39.478421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:34.954 [2024-11-20 08:11:39.478580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:34.954 [2024-11-20 08:11:39.478738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.954 [2024-11-20 08:11:39.478738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.527 [2024-11-20 08:11:40.206041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.527 Malloc0 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.527 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 [2024-11-20 
08:11:40.283280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:35.788 { 00:14:35.788 "params": { 00:14:35.788 "name": "Nvme$subsystem", 00:14:35.788 "trtype": "$TEST_TRANSPORT", 00:14:35.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:35.788 "adrfam": "ipv4", 00:14:35.788 "trsvcid": "$NVMF_PORT", 00:14:35.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:35.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:35.788 "hdgst": ${hdgst:-false}, 00:14:35.788 "ddgst": ${ddgst:-false} 00:14:35.788 }, 00:14:35.788 "method": "bdev_nvme_attach_controller" 00:14:35.788 } 00:14:35.788 EOF 00:14:35.788 )") 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
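The `rpc_cmd` calls traced at target/bdevio.sh@18-22 build the target side end to end: TCP transport, a 64 MiB malloc bdev, a subsystem, its namespace, and the 10.0.0.2:4420 listener. The equivalent scripts/rpc.py invocations, shown as a dry run since the RPC socket only exists inside the running test:

```shell
#!/usr/bin/env bash
# The RPC sequence from the trace; rpc() echoes instead of talking to
# /var/tmp/spdk.sock. Method names and arguments are taken from the log.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```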
00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:14:35.788 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:35.788 "params": { 00:14:35.788 "name": "Nvme1", 00:14:35.788 "trtype": "tcp", 00:14:35.788 "traddr": "10.0.0.2", 00:14:35.788 "adrfam": "ipv4", 00:14:35.789 "trsvcid": "4420", 00:14:35.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.789 "hdgst": false, 00:14:35.789 "ddgst": false 00:14:35.789 }, 00:14:35.789 "method": "bdev_nvme_attach_controller" 00:14:35.789 }' 00:14:35.789 [2024-11-20 08:11:40.351000] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:14:35.789 [2024-11-20 08:11:40.351095] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850568 ] 00:14:35.789 [2024-11-20 08:11:40.438751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:35.789 [2024-11-20 08:11:40.483564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.789 [2024-11-20 08:11:40.483683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.789 [2024-11-20 08:11:40.483687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.050 I/O targets: 00:14:36.050 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:36.050 00:14:36.050 00:14:36.050 CUnit - A unit testing framework for C - Version 2.1-3 00:14:36.050 http://cunit.sourceforge.net/ 00:14:36.050 00:14:36.050 00:14:36.050 Suite: bdevio tests on: Nvme1n1 00:14:36.311 Test: blockdev write read block ...passed 00:14:36.311 Test: blockdev write zeroes read block ...passed 00:14:36.311 Test: blockdev write zeroes read no split ...passed 00:14:36.311 Test: blockdev write zeroes read split 
...passed 00:14:36.311 Test: blockdev write zeroes read split partial ...passed 00:14:36.311 Test: blockdev reset ...[2024-11-20 08:11:40.925257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:36.311 [2024-11-20 08:11:40.925325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15014b0 (9): Bad file descriptor 00:14:36.572 [2024-11-20 08:11:41.073716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:14:36.572 passed 00:14:36.572 Test: blockdev write read 8 blocks ...passed 00:14:36.572 Test: blockdev write read size > 128k ...passed 00:14:36.572 Test: blockdev write read invalid size ...passed 00:14:36.572 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:36.572 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:36.572 Test: blockdev write read max offset ...passed 00:14:36.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:36.572 Test: blockdev writev readv 8 blocks ...passed 00:14:36.572 Test: blockdev writev readv 30 x 1block ...passed 00:14:36.572 Test: blockdev writev readv block ...passed 00:14:36.572 Test: blockdev writev readv size > 128k ...passed 00:14:36.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:36.572 Test: blockdev comparev and writev ...[2024-11-20 08:11:41.297964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.572 [2024-11-20 08:11:41.297989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:36.572 [2024-11-20 08:11:41.298001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.572 [2024-11-20 
08:11:41.298007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:36.572 [2024-11-20 08:11:41.298482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.572 [2024-11-20 08:11:41.298490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:36.572 [2024-11-20 08:11:41.298500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.572 [2024-11-20 08:11:41.298506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.298953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.833 [2024-11-20 08:11:41.298962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.298977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.833 [2024-11-20 08:11:41.298982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.299429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.833 [2024-11-20 08:11:41.299436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.299446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.833 [2024-11-20 08:11:41.299451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:36.833 passed 00:14:36.833 Test: blockdev nvme passthru rw ...passed 00:14:36.833 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:11:41.383768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.833 [2024-11-20 08:11:41.383779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.384106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.833 [2024-11-20 08:11:41.384113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.384464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.833 [2024-11-20 08:11:41.384471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:36.833 [2024-11-20 08:11:41.384813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.833 [2024-11-20 08:11:41.384821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:36.833 passed 00:14:36.833 Test: blockdev nvme admin passthru ...passed 00:14:36.833 Test: blockdev copy ...passed 00:14:36.833 00:14:36.833 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.833 suites 1 1 n/a 0 0 00:14:36.833 tests 23 23 23 0 0 00:14:36.833 asserts 152 152 152 0 n/a 00:14:36.833 00:14:36.833 Elapsed time = 1.375 seconds 
00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:36.833 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:37.095 rmmod nvme_tcp 00:14:37.095 rmmod nvme_fabrics 00:14:37.095 rmmod nvme_keyring 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 1850219 ']' 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1850219 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 1850219 ']' 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1850219 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850219 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850219' 00:14:37.095 killing process with pid 1850219 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1850219 00:14:37.095 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1850219 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:37.356 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 
-- # delete_main_bridge 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush 
dev cvl_0_1 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:14:39.270 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:14:39.271 00:14:39.271 real 0m13.399s 00:14:39.271 user 0m14.498s 00:14:39.271 sys 0m6.953s 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.271 08:11:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.271 ************************************ 00:14:39.271 END TEST nvmf_bdevio 00:14:39.271 ************************************ 00:14:39.532 08:11:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:14:39.532 08:11:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:39.532 08:11:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:39.532 08:11:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.532 08:11:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:39.532 ************************************ 00:14:39.532 START TEST nvmf_target_multipath 00:14:39.532 ************************************ 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:39.532 * Looking for test storage... 00:14:39.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.532 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 
00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:39.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.533 --rc genhtml_branch_coverage=1 00:14:39.533 --rc genhtml_function_coverage=1 00:14:39.533 --rc genhtml_legend=1 00:14:39.533 --rc geninfo_all_blocks=1 00:14:39.533 --rc geninfo_unexecuted_blocks=1 00:14:39.533 00:14:39.533 ' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:39.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.533 --rc genhtml_branch_coverage=1 00:14:39.533 --rc genhtml_function_coverage=1 00:14:39.533 --rc genhtml_legend=1 00:14:39.533 --rc geninfo_all_blocks=1 00:14:39.533 --rc geninfo_unexecuted_blocks=1 00:14:39.533 00:14:39.533 ' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:39.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.533 --rc genhtml_branch_coverage=1 00:14:39.533 --rc genhtml_function_coverage=1 00:14:39.533 --rc genhtml_legend=1 00:14:39.533 --rc geninfo_all_blocks=1 00:14:39.533 --rc geninfo_unexecuted_blocks=1 00:14:39.533 00:14:39.533 ' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:39.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.533 --rc genhtml_branch_coverage=1 00:14:39.533 --rc genhtml_function_coverage=1 00:14:39.533 --rc genhtml_legend=1 00:14:39.533 --rc geninfo_all_blocks=1 00:14:39.533 --rc geninfo_unexecuted_blocks=1 00:14:39.533 00:14:39.533 ' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.533 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:39.795 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:39.795 
08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:14:39.795 08:11:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:14:47.940 08:11:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.940 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:47.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:47.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:47.941 08:11:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:47.941 Found net devices under 0000:31:00.0: 
cvl_0_0 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:47.941 Found net devices under 0000:31:00.1: cvl_0_1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 
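The `create_target_ns`/`add_to_ns`/`set_up` sequence traced above boils down to a handful of iproute2 calls. A dry-run sketch (commands are echoed rather than executed, since the real calls require root; device and namespace names are taken from the trace):

```shell
# Dry-run of the namespace plumbing traced above. `run` echoes each
# command instead of executing it, because the real calls need root.
NS=nvmf_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"                      # create_target_ns
run ip netns exec "$NS" ip link set lo up   # set_up lo inside the ns
run ip link set cvl_0_1 netns "$NS"         # add_to_ns: move target NIC
run ip link set cvl_0_0 up                  # set_up initiator NIC
run ip netns exec "$NS" ip link set cvl_0_1 up
```

The key design point visible in the log: the target side lives entirely inside the namespace, so every target-side command is prefixed with `ip netns exec "$NVMF_TARGET_NAMESPACE"` via the `NVMF_TARGET_NS_CMD` array.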
00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:47.941 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:47.942 10.0.0.1 00:14:47.942 08:11:52 
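The `val_to_ip` helper traced at setup.sh@11-13 turns the packed pool value (167772161 = 0x0a000001) into dotted-quad form. A standalone re-implementation consistent with the `printf '%u.%u.%u.%u\n' 10 0 0 1` output seen in the trace:

```shell
# Pure-bash integer -> dotted-quad conversion, matching the printf
# output logged by setup.sh@13 (10.0.0.1 for 167772161).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

Keeping the pool as a plain integer is what lets `setup_interface_pair` hand out consecutive addresses with ordinary arithmetic (`ips=("$ip" $((++ip)))` in the trace).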
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:47.942 10.0.0.2 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local 
dev=cvl_0_0 in_ns= 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:47.942 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
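The `ipts` call at setup.sh@73, expanded at common.sh@547 above, tags every firewall rule with an `SPDK_NVMF:` comment so cleanup can later find and delete exactly the rules the test added. A sketch of that wrapper in dry-run form (`iptables` is swapped for `echo` here so the sketch runs without root):

```shell
# Sketch of the ipts wrapper: append a traceable comment carrying the
# original rule text. IPTABLES is echo here so this runs unprivileged.
IPTABLES="echo iptables"
ipts() { $IPTABLES "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Embedding the full rule in the comment (rather than just a marker) means teardown can replay each saved rule with `-I` flipped to `-D`.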
nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:48.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.665 ms 00:14:48.204 00:14:48.204 --- 10.0.0.1 ping statistics --- 00:14:48.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.204 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.204 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:48.205 08:11:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:48.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:14:48.205 00:14:48.205 --- 10.0.0.2 ping statistics --- 00:14:48.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.205 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:48.205 08:11:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local 
dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:14:48.205 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 
-- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:48.206 only one NIC for nvmf test 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:48.206 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:48.206 rmmod nvme_tcp 00:14:48.206 rmmod nvme_fabrics 00:14:48.467 rmmod nvme_keyring 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:48.467 
08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:48.467 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 
00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:50.383 08:11:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:14:50.383 00:14:50.383 real 0m11.062s 00:14:50.383 user 0m2.464s 00:14:50.383 sys 0m6.540s 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.383 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:50.383 ************************************ 00:14:50.383 END TEST nvmf_target_multipath 00:14:50.383 ************************************ 00:14:50.644 08:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:50.644 08:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.644 08:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.644 
08:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:50.644 ************************************ 00:14:50.644 START TEST nvmf_zcopy 00:14:50.644 ************************************ 00:14:50.644 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:50.644 * Looking for test storage... 00:14:50.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.644 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@341 -- # ver2_l=1 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.645 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.906 --rc genhtml_branch_coverage=1 00:14:50.906 --rc genhtml_function_coverage=1 00:14:50.906 --rc genhtml_legend=1 00:14:50.906 --rc geninfo_all_blocks=1 00:14:50.906 --rc geninfo_unexecuted_blocks=1 00:14:50.906 00:14:50.906 ' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.906 --rc genhtml_branch_coverage=1 00:14:50.906 --rc genhtml_function_coverage=1 00:14:50.906 --rc genhtml_legend=1 00:14:50.906 --rc geninfo_all_blocks=1 00:14:50.906 --rc geninfo_unexecuted_blocks=1 00:14:50.906 00:14:50.906 ' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.906 --rc genhtml_branch_coverage=1 00:14:50.906 --rc genhtml_function_coverage=1 00:14:50.906 --rc genhtml_legend=1 00:14:50.906 --rc geninfo_all_blocks=1 00:14:50.906 --rc geninfo_unexecuted_blocks=1 00:14:50.906 00:14:50.906 ' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.906 --rc genhtml_branch_coverage=1 00:14:50.906 --rc genhtml_function_coverage=1 00:14:50.906 --rc genhtml_legend=1 00:14:50.906 --rc geninfo_all_blocks=1 00:14:50.906 --rc geninfo_unexecuted_blocks=1 00:14:50.906 00:14:50.906 ' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.906 08:11:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.906 
08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.906 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:14:50.907 08:11:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:50.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:50.907 08:11:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:14:50.907 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:14:59.162 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:59.162 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:59.162 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:59.162 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:59.162 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:59.162 Found net devices under 0000:31:00.0: cvl_0_0 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:59.162 
08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:59.162 Found net devices under 0000:31:00.1: cvl_0_1 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:59.162 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.162 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 
00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # 
eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:59.163 10.0.0.1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:59.163 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:59.163 10.0.0.2 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:59.163 08:12:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:59.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.670 ms 00:14:59.163 00:14:59.163 --- 10.0.0.1 ping statistics --- 00:14:59.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.163 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:59.163 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:59.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:14:59.164 00:14:59.164 --- 10.0.0.2 ping statistics --- 00:14:59.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.164 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:59.164 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:59.426 
08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:59.426 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=1860545 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 1860545 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1860545 ']' 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.427 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:59.427 [2024-11-20 08:12:04.063354] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:14:59.427 [2024-11-20 08:12:04.063422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.689 [2024-11-20 08:12:04.175549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.689 [2024-11-20 08:12:04.226693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.689 [2024-11-20 08:12:04.226745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.689 [2024-11-20 08:12:04.226755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.689 [2024-11-20 08:12:04.226762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:59.689 [2024-11-20 08:12:04.226769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.689 [2024-11-20 08:12:04.227554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 [2024-11-20 08:12:04.917520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 [2024-11-20 08:12:04.933818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 malloc0 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:15:00.262 { 00:15:00.262 "params": { 00:15:00.262 "name": "Nvme$subsystem", 00:15:00.262 "trtype": "$TEST_TRANSPORT", 00:15:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.262 "adrfam": "ipv4", 00:15:00.262 "trsvcid": "$NVMF_PORT", 00:15:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.262 "hdgst": ${hdgst:-false}, 00:15:00.262 "ddgst": ${ddgst:-false} 00:15:00.262 }, 00:15:00.262 "method": "bdev_nvme_attach_controller" 00:15:00.262 } 00:15:00.262 EOF 00:15:00.262 )") 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:15:00.262 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:15:00.262 "params": { 00:15:00.262 "name": "Nvme1", 00:15:00.262 "trtype": "tcp", 00:15:00.262 "traddr": "10.0.0.2", 00:15:00.262 "adrfam": "ipv4", 00:15:00.262 "trsvcid": "4420", 00:15:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.262 "hdgst": false, 00:15:00.262 "ddgst": false 00:15:00.262 }, 00:15:00.262 "method": "bdev_nvme_attach_controller" 00:15:00.262 }' 00:15:00.523 [2024-11-20 08:12:05.026301] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:15:00.523 [2024-11-20 08:12:05.026382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860786 ] 00:15:00.523 [2024-11-20 08:12:05.112783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.523 [2024-11-20 08:12:05.154976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.783 Running I/O for 10 seconds... 
00:15:03.109 6441.00 IOPS, 50.32 MiB/s
[2024-11-20T07:12:08.779Z] 7311.00 IOPS, 57.12 MiB/s
[2024-11-20T07:12:09.722Z] 8035.67 IOPS, 62.78 MiB/s
[2024-11-20T07:12:10.665Z] 8395.50 IOPS, 65.59 MiB/s
[2024-11-20T07:12:11.623Z] 8606.80 IOPS, 67.24 MiB/s
[2024-11-20T07:12:12.563Z] 8747.83 IOPS, 68.34 MiB/s
[2024-11-20T07:12:13.504Z] 8850.29 IOPS, 69.14 MiB/s
[2024-11-20T07:12:14.889Z] 8930.38 IOPS, 69.77 MiB/s
[2024-11-20T07:12:15.834Z] 8990.89 IOPS, 70.24 MiB/s
[2024-11-20T07:12:15.834Z] 9039.90 IOPS, 70.62 MiB/s
00:15:11.105 Latency(us)
00:15:11.105 [2024-11-20T07:12:15.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:11.105 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:11.105 Verification LBA range: start 0x0 length 0x1000
00:15:11.105 Nvme1n1 : 10.01 9040.96 70.63 0.00 0.00 14106.32 1856.85 27852.80
00:15:11.105 [2024-11-20T07:12:15.834Z] ===================================================================================================================
00:15:11.105 [2024-11-20T07:12:15.834Z] Total : 9040.96 70.63 0.00 0.00 14106.32 1856.85 27852.80
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=1863245
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:15:11.105 {
00:15:11.105 "params": {
00:15:11.105 "name": "Nvme$subsystem",
00:15:11.105 "trtype": "$TEST_TRANSPORT",
00:15:11.105 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:11.105 "adrfam": "ipv4",
00:15:11.105 "trsvcid": "$NVMF_PORT",
00:15:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:11.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:11.105 "hdgst": ${hdgst:-false},
00:15:11.105 "ddgst": ${ddgst:-false}
00:15:11.105 },
00:15:11.105 "method": "bdev_nvme_attach_controller"
00:15:11.105 }
00:15:11.105 EOF
00:15:11.105 )")
00:15:11.105 [2024-11-20 08:12:15.619738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:11.105 [2024-11-20 08:12:15.619768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq .
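The verify-run summary above reports 9040.96 IOPS and 70.63 MiB/s with 8192-byte I/Os (the `-o 8192` flag). Those two figures should be consistent with each other, which a quick arithmetic check confirms; this is just a sanity check on the log's numbers, not part of the test itself.

```shell
# Cross-check the bdevperf summary line: IOPS x IO size should equal the
# reported throughput. 9040.96 IOPS and the 8192-byte IO size ("-o 8192")
# both come from the log above.
iops=9040.96
io_size=8192   # bytes
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# -> 70.63 MiB/s, matching the MiB/s column in the table.
```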
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:15:11.105 08:12:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:15:11.105 "params": {
00:15:11.105 "name": "Nvme1",
00:15:11.105 "trtype": "tcp",
00:15:11.105 "traddr": "10.0.0.2",
00:15:11.105 "adrfam": "ipv4",
00:15:11.105 "trsvcid": "4420",
00:15:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:11.105 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:11.105 "hdgst": false,
00:15:11.105 "ddgst": false
00:15:11.105 },
00:15:11.105 "method": "bdev_nvme_attach_controller"
00:15:11.105 }'
00:15:11.105 [2024-11-20 08:12:15.662778] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
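The bdevperf invocation above receives its config as `--json /dev/fd/63`, which is how bash process substitution surfaces the generated JSON: the config is handed over on an anonymous file descriptor instead of a temp file. The sketch below re-creates only the fd plumbing under that assumption; `gen_config` is a hypothetical stand-in for gen_nvmf_target_json, and `cat` substitutes for the bdevperf binary.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for gen_nvmf_target_json: emit a tiny config fragment.
gen_config() {
    printf '%s\n' '{"method": "bdev_nvme_attach_controller"}'
}

# In the test this would be (bash expands <(...) to a /dev/fd/NN path):
#   bdevperf --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192
# cat stands in for bdevperf here to show the descriptor plumbing.
cat <(gen_config)
```

Because `<(...)` is a bash feature, the real test scripts depend on running under bash; the descriptor number (62, 63, ...) is whatever bash happens to allocate for the substituted process.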
00:15:11.106 [2024-11-20 08:12:15.662826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863245 ]
00:15:11.106 [2024-11-20 08:12:15.739111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:11.106 [2024-11-20 08:12:15.774334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:11.368 Running I/O for 5 seconds...
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.154 [2024-11-20 08:12:16.765454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.154 [2024-11-20 08:12:16.765469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.154 [2024-11-20 08:12:16.774809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.154 [2024-11-20 08:12:16.774824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.154 [2024-11-20 08:12:16.783943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.154 [2024-11-20 08:12:16.783958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.154 [2024-11-20 08:12:16.792904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.792918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.801545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.801560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.810626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.810642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.819688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.819702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.828757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.828772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:12.155 [2024-11-20 08:12:16.837781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.837795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.846691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.846705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.855253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.855267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.864485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.864499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.155 [2024-11-20 08:12:16.873346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.155 [2024-11-20 08:12:16.873360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.882143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.882158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.891644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.891658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.899834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.899849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.907740] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.907754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.916659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.916673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.925822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.925837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.934877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.934892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.943754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.943768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.952702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.952716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.960900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.960914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.970032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.970046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.979097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.979111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.988011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.988025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:16.996867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:16.996881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.005511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.005525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.014576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.014590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.023498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.023512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.032267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.032281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.041157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.041171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 18459.00 IOPS, 144.21 MiB/s [2024-11-20T07:12:17.144Z] [2024-11-20 08:12:17.050207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.050221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.059470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.059484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.068442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.068457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.077598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.077613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.086991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.087005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.095594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.095608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.104591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.104606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.113637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.113651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.122953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 
[2024-11-20 08:12:17.122967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.131830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.415 [2024-11-20 08:12:17.131844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.415 [2024-11-20 08:12:17.140741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.416 [2024-11-20 08:12:17.140760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.149602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.149617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.158318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.158333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.167361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.167376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.176547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.176562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.185203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.185218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.194427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.194442] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.203238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.203252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.212312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.212327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.221628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.221643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.231065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.231080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.239182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.239197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.247569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.247584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.256348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.256362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.264841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.264856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:12.678 [2024-11-20 08:12:17.273609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.273623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.282311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.282325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.291137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.291151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.299500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.299514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.308662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.308680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.316799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.316813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.325412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.325427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.333892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.333907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.342793] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.342807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.350883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.350897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.360052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.360067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.368852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.368870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.376989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.377003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.385587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.385602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.678 [2024-11-20 08:12:17.394231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.678 [2024-11-20 08:12:17.394245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.679 [2024-11-20 08:12:17.403315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.679 [2024-11-20 08:12:17.403329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.412314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.412329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.421312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.421327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.430529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.430544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.439465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.439479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.448281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.448295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.457230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.457244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.466432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.466447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.475344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.475363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.484802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 
[2024-11-20 08:12:17.484817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.493547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.493561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.502707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.502722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.511503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.511518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.520560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.520575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.528758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.528772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.537434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.537449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.546228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.546243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.554677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.554691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.563578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.563593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.941 [2024-11-20 08:12:17.571715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.941 [2024-11-20 08:12:17.571730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.580507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.580522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.589546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.589561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.598306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.598321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.606855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.606875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.615860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.615879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.624825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.624840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:12.942 [2024-11-20 08:12:17.633971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.633986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.642079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.642097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.651225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.651240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.942 [2024-11-20 08:12:17.660307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.942 [2024-11-20 08:12:17.660321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204 [2024-11-20 08:12:17.668682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:13.204 [2024-11-20 08:12:17.668697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204 [2024-11-20 08:12:17.678005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:13.204 [2024-11-20 08:12:17.678020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204 [2024-11-20 08:12:17.686623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:13.204 [2024-11-20 08:12:17.686638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204 [2024-11-20 08:12:17.695160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:13.204 [2024-11-20 08:12:17.695174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204 [2024-11-20 08:12:17.704380] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:13.204 [2024-11-20 08:12:17.704395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.204
[... the same two-line error pair (subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats roughly every 9 ms from 08:12:17.713 through 08:12:18.041; repeats omitted ...]
18516.50 IOPS, 144.66 MiB/s [2024-11-20T07:12:18.195Z]
[... error pair continues repeating from 08:12:18.049 through 08:12:19.047; repeats omitted ...]
18559.67 IOPS, 145.00 MiB/s [2024-11-20T07:12:19.246Z]
[... error pair continues repeating from 08:12:19.056 through 08:12:19.213 and beyond this chunk; repeats omitted ...]
add namespace 00:15:14.517 [2024-11-20 08:12:19.222651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.517 [2024-11-20 08:12:19.222666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.517 [2024-11-20 08:12:19.231379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.517 [2024-11-20 08:12:19.231393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.517 [2024-11-20 08:12:19.239964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.517 [2024-11-20 08:12:19.239979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.249299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.249314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.257655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.257669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.266829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.266844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.275592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.275607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.284385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.284399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.293052] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.293067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.302601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.302620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.310700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.310714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.319705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.319719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.328750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.328765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.337454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.337469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.346194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.346209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.355012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.355026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.363236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.363250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.372237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.372251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.381307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.381321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.389860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.389879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.398286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.398301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.407319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.407333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.416539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.416554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.425121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.425136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.434436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 
[2024-11-20 08:12:19.434451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.443558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.443573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.452586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.452601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.460588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.460602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.469393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.469411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.477935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.477950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.486956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.486971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.495606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.495620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.779 [2024-11-20 08:12:19.504753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.779 [2024-11-20 08:12:19.504768] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.513911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.513926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.523024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.523038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.531023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.531038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.540042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.540057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.549283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.549297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.558088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.558103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.566226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.566240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.575157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.575172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:15.041 [2024-11-20 08:12:19.584343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.584357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.592545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.592559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.601543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.601558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.610699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.610714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.619318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.619332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.628532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.628547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.637831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.637849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.646653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.646668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.655828] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.655844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.664610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.664625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.673286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.673301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.682216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.682231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.690794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.690808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.699336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.699351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.708181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.708196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.716687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.716703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.725935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.725950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.734432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.734447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.743298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.743312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.752410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.752424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.041 [2024-11-20 08:12:19.761176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.041 [2024-11-20 08:12:19.761191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.770574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.770589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.779280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.779295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.788326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.788340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.797091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 
[2024-11-20 08:12:19.797106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.805428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.805447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.814387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.814402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.823265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.823280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.831739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.831753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.840950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.840964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.849585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.849600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.858138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.858153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.867071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.867085] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.876021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.876035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.884667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.884681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.893483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.893497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.902252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.902266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.915171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.915186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.923569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.923583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.932101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.932116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.940437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.940452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:15.303 [2024-11-20 08:12:19.949351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.949365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.958426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.958440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.967326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.967340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.976211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.976229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.985470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.985485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:19.994413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:19.994427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:20.003790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:20.003806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:20.013083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:20.013098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.303 [2024-11-20 08:12:20.022343] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.303 [2024-11-20 08:12:20.022358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.030494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.030509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.039512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.039527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.048207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.048222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 18570.25 IOPS, 145.08 MiB/s [2024-11-20T07:12:20.293Z] [2024-11-20 08:12:20.056552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.056566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.065735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.065749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.074484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.074498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.082561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.082575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.091117] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.091132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.100120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.100134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.108681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.108695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.117272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.117286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.126381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.126396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.564 [2024-11-20 08:12:20.135397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.564 [2024-11-20 08:12:20.135411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.144554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.144569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.152989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.153003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.162088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.162103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.170891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.170905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.179572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.179585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.188649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.188664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.197725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.197739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.206613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.206627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.215734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.215749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.225035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.225050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.234203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 
[2024-11-20 08:12:20.234217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.243016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.243030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.251638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.251652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.260430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.260444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.269010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.269025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.277935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.277950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.565 [2024-11-20 08:12:20.287204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.565 [2024-11-20 08:12:20.287218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.826 [2024-11-20 08:12:20.296421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.296436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.305066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.305080] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.314000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.314014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.322566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.322580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.331616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.331630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.340843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.340858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.350181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.350196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.359206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.359220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.368432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.368447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.377327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.377341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:15.827 [2024-11-20 08:12:20.386357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.386372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.394934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.394949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.403734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.403748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.412824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.412838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.421797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.421811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.430448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.430461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.439175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.439189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.448324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.448338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.457179] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.457193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.466048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.466062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.474066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.474083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.482789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.482804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.490980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.490994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.499881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.499895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.508915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.508930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.518041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.518055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.527359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.527373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.536260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.536274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.827 [2024-11-20 08:12:20.544825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.827 [2024-11-20 08:12:20.544839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.553957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.553973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.562854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.562873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.571270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.571284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.580248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.580262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.589205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.589219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.598119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 
[2024-11-20 08:12:20.598134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.607031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.607046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.616273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.616287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.625696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.625710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.634313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.634327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.642994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.643012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.652090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.652104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.660938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.660952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.669302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.669316] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.678135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.678149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.687271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.687286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.696112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.696126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.704932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.704946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.713391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.713405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.721728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.721742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.730784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.730798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.739592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.739606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:16.089 [2024-11-20 08:12:20.748661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.748675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.757229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.757244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.765945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.765960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.775147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.775161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.784106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.784120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.792949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.792963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.801695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.801709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.089 [2024-11-20 08:12:20.811274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.089 [2024-11-20 08:12:20.811293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.819428] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.819443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.828234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.828248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.836660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.836674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.845435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.845449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.853577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.853592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.862687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.862701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.871973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.871988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.880090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.880104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.889269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.889284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.898081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.898096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.907248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.907263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.915976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.915991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.924774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.924788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.933858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.933877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.942774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.942788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.951598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.951612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.960754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 
[2024-11-20 08:12:20.960768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.968814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.968828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.977997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.978016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.986987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.987002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:20.995039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:20.995053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.003946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.003961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.012580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.012595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.021762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.021777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.030778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.030792] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.039517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.039532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 [2024-11-20 08:12:21.048729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.048743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.352 18573.40 IOPS, 145.10 MiB/s [2024-11-20T07:12:21.081Z] [2024-11-20 08:12:21.055236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.352 [2024-11-20 08:12:21.055250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.097342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.097353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 00:15:16.614 Latency(us) 00:15:16.614 [2024-11-20T07:12:21.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.614 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:16.614 Nvme1n1 : 5.05 18424.97 143.95 0.00 0.00 6884.26 2689.71 48496.64 00:15:16.614 [2024-11-20T07:12:21.343Z] =================================================================================================================== 00:15:16.614 [2024-11-20T07:12:21.343Z] Total : 18424.97 143.95 0.00 0.00 6884.26 2689.71 48496.64 00:15:16.614 [2024-11-20 08:12:21.103120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.103132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.111139] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.111150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.119158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.119167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.127181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.127193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.135199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.135210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.143221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.143231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.151241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.151250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.159259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.159268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.167278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.167287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.175298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.175306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.183318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.183326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.191340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.191350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.199358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.199365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 [2024-11-20 08:12:21.207378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.614 [2024-11-20 08:12:21.207386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (1863245) - No such process 00:15:16.614 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 1863245 00:15:16.614 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:16.615 delay0 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.615 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:16.877 [2024-11-20 08:12:21.353027] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:25.052 Initializing NVMe Controllers 00:15:25.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.052 Initialization complete. Launching workers. 
00:15:25.052 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 316, failed: 8569 00:15:25.052 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8836, failed to submit 49 00:15:25.052 success 8629, unsuccessful 207, failed 0 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:25.052 rmmod nvme_tcp 00:15:25.052 rmmod nvme_fabrics 00:15:25.052 rmmod nvme_keyring 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 1860545 ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1860545 ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860545' 00:15:25.052 killing process with pid 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1860545 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:25.052 08:12:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@121 -- # return 0 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:15:26.439 
08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:15:26.439 00:15:26.439 real 0m35.717s 00:15:26.439 user 0m47.111s 00:15:26.439 sys 0m12.159s 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 ************************************ 00:15:26.439 END TEST nvmf_zcopy 00:15:26.439 ************************************ 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # trap - SIGINT SIGTERM EXIT 00:15:26.439 00:15:26.439 real 5m17.276s 00:15:26.439 user 11m45.915s 00:15:26.439 sys 1m59.258s 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.439 08:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 ************************************ 00:15:26.440 END TEST nvmf_target_core 00:15:26.440 ************************************ 00:15:26.440 08:12:30 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:26.440 08:12:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.440 08:12:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.440 08:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 ************************************ 00:15:26.440 START TEST nvmf_target_extra 00:15:26.440 
************************************ 00:15:26.440 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:26.440 * Looking for test storage... 00:15:26.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:26.440 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.440 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.440 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:26.702 08:12:31 
nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.702 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.702 --rc genhtml_branch_coverage=1 00:15:26.702 --rc genhtml_function_coverage=1 00:15:26.702 --rc genhtml_legend=1 00:15:26.702 --rc geninfo_all_blocks=1 00:15:26.702 --rc geninfo_unexecuted_blocks=1 00:15:26.702 00:15:26.702 ' 00:15:26.702 08:12:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.702 --rc genhtml_branch_coverage=1 00:15:26.702 --rc genhtml_function_coverage=1 00:15:26.702 --rc genhtml_legend=1 00:15:26.702 --rc geninfo_all_blocks=1 00:15:26.703 --rc geninfo_unexecuted_blocks=1 00:15:26.703 00:15:26.703 ' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.703 --rc genhtml_branch_coverage=1 00:15:26.703 --rc genhtml_function_coverage=1 00:15:26.703 --rc genhtml_legend=1 00:15:26.703 --rc geninfo_all_blocks=1 00:15:26.703 --rc geninfo_unexecuted_blocks=1 00:15:26.703 00:15:26.703 ' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.703 --rc genhtml_branch_coverage=1 00:15:26.703 --rc genhtml_function_coverage=1 00:15:26.703 --rc genhtml_legend=1 00:15:26.703 --rc geninfo_all_blocks=1 00:15:26.703 --rc geninfo_unexecuted_blocks=1 00:15:26.703 00:15:26.703 ' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:26.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.703 ************************************ 00:15:26.703 START TEST nvmf_example 00:15:26.703 ************************************ 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:26.703 * Looking for test storage... 
00:15:26.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.703 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.967 --rc genhtml_branch_coverage=1 00:15:26.967 --rc 
genhtml_function_coverage=1 00:15:26.967 --rc genhtml_legend=1 00:15:26.967 --rc geninfo_all_blocks=1 00:15:26.967 --rc geninfo_unexecuted_blocks=1 00:15:26.967 00:15:26.967 ' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.967 --rc genhtml_branch_coverage=1 00:15:26.967 --rc genhtml_function_coverage=1 00:15:26.967 --rc genhtml_legend=1 00:15:26.967 --rc geninfo_all_blocks=1 00:15:26.967 --rc geninfo_unexecuted_blocks=1 00:15:26.967 00:15:26.967 ' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.967 --rc genhtml_branch_coverage=1 00:15:26.967 --rc genhtml_function_coverage=1 00:15:26.967 --rc genhtml_legend=1 00:15:26.967 --rc geninfo_all_blocks=1 00:15:26.967 --rc geninfo_unexecuted_blocks=1 00:15:26.967 00:15:26.967 ' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.967 --rc genhtml_branch_coverage=1 00:15:26.967 --rc genhtml_function_coverage=1 00:15:26.967 --rc genhtml_legend=1 00:15:26.967 --rc geninfo_all_blocks=1 00:15:26.967 --rc geninfo_unexecuted_blocks=1 00:15:26.967 00:15:26.967 ' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.967 08:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:26.967 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:26.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:26.968 
08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:15:26.968 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.120 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.120 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:35.120 Found 0000:31:00.0 (0x8086 - 0x159b) 
00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:35.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
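The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` step above resolves each PCI address to its kernel interface name by globbing sysfs. A self-contained sketch of that lookup, assuming only the standard sysfs layout — `pci_to_netdev` and the fake-tree parameter are illustrative additions, not functions from nvmf/common.sh:

```shell
# Resolve a PCI address to its net device name(s) via the sysfs net/ subdir.
# The sysfs root is a parameter so the logic can be exercised without hardware.
pci_to_netdev() {
  local pci=$1 sysfs=${2:-/sys/bus/pci/devices} d
  for d in "$sysfs/$pci/net/"*; do
    [ -e "$d" ] && echo "${d##*/}"   # strip the path, keep the interface name
  done
}

# demo against a throwaway fake sysfs tree mimicking the machine in the log
fake=$(mktemp -d)
mkdir -p "$fake/0000:31:00.0/net/cvl_0_0"
pci_to_netdev 0000:31:00.0 "$fake"   # prints cvl_0_0
```

This is what produces the "Found net devices under 0000:31:00.0: cvl_0_0" lines: the glob result is trimmed with `${pci_net_devs[@]##*/}` and appended to `net_devs`.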
00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:35.120 Found net devices under 0000:31:00.0: cvl_0_0 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:35.120 Found net devices under 0000:31:00.1: cvl_0_1 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@247 -- # create_target_ns 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:15:35.120 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:35.121 
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:35.121 10.0.0.1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:35.121 10.0.0.2 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:35.121 
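The `val_to_ip` calls traced above turn the integer address pool (167772161 = 0x0A000001) into dotted-quad form before `ip addr add`. A sketch of that conversion, assuming plain shift-and-mask arithmetic feeding the same `printf '%u.%u.%u.%u\n'` seen in the trace (the exact body of the script's helper is not shown in full here):

```shell
# Convert a 32-bit integer to dotted-quad notation, one octet per byte.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1, assigned to the initiator side (cvl_0_0)
val_to_ip 167772162   # 10.0.0.2, assigned inside the target namespace (cvl_0_1)
```

Each initiator/target pair consumes two consecutive addresses, which is why the surrounding loop advances the pool with `(( ip_pool += 2 ))` per pair.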
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:35.121 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:35.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.615 ms 00:15:35.384 00:15:35.384 --- 10.0.0.1 ping statistics --- 00:15:35.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.384 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:35.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:15:35.384 00:15:35.384 --- 10.0.0.2 ping statistics --- 00:15:35.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.384 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # 
local dev=initiator1 in_ns= ip 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 
-- # [[ -n target0 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:35.384 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@98 -- # local dev=target1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.385 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp 
== tcp ']' 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1870698 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1870698 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1870698 ']' 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
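The per-pair plumbing traced earlier — create the nvmf_ns_spdk namespace, bring up its loopback, move the target-side port into it, assign 10.0.0.1/10.0.0.2, bring both links up, and open TCP/4420 — can be condensed into one ordered sequence. A hedged sketch with a dry-run wrapper so the ordering can be inspected without root; `run` and `setup_pair` are illustrative names, not functions from nvmf/setup.sh, and the ifalias bookkeeping the script also does is omitted for brevity:

```shell
# Echo each command instead of executing it, so the sequence is testable.
run() { echo "$*"; }

# One initiator/target pair, following the order observed in the trace.
setup_pair() {
  local ns=$1 initiator=$2 target=$3 ip1=$4 ip2=$5
  run ip netns add "$ns"                                   # create_target_ns
  run ip netns exec "$ns" ip link set lo up                # set_up lo in ns
  run ip link set "$target" netns "$ns"                    # add_to_ns
  run ip addr add "$ip1/24" dev "$initiator"               # set_ip initiator
  run ip netns exec "$ns" ip addr add "$ip2/24" dev "$target"  # set_ip target
  run ip link set "$initiator" up                          # set_up initiator
  run ip netns exec "$ns" ip link set "$target" up         # set_up target
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT  # ipts
}

setup_pair nvmf_ns_spdk cvl_0_0 cvl_0_1 10.0.0.1 10.0.0.2
```

After this, the trace's ping in both directions verifies the pair, and the target app is launched under `ip netns exec nvmf_ns_spdk` so it listens on 10.0.0.2:4420.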
00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.385 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:36.329 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:48.566 Initializing NVMe Controllers
00:15:48.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:48.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:48.566 Initialization complete. Launching workers.
00:15:48.566 ========================================================
00:15:48.566 Latency(us)
00:15:48.566 Device Information : IOPS MiB/s Average min max
00:15:48.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18328.54 71.60 3491.45 667.72 16451.81
00:15:48.566 ========================================================
00:15:48.566 Total : 18328.54 71.60 3491.45 667.72 16451.81
00:15:48.566
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20}
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:15:48.566 rmmod nvme_tcp
00:15:48.566 rmmod nvme_fabrics
00:15:48.566 rmmod nvme_keyring
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e
00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 1870698 ']' 00:15:48.566 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 1870698 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1870698 ']' 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1870698 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870698 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870698' 00:15:48.567 killing process with pid 1870698 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1870698 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1870698 00:15:48.567 nvmf threads initialize successfully 00:15:48.567 bdev subsystem init successfully 00:15:48.567 created a nvmf target service 00:15:48.567 create targets's poll groups done 00:15:48.567 all subsystems of target started 00:15:48.567 nvmf target is running 00:15:48.567 all subsystems of target stopped 00:15:48.567 destroy targets's poll groups done 00:15:48.567 destroyed the nvmf target service 00:15:48.567 bdev subsystem finish successfully 00:15:48.567 nvmf threads destroy successfully 00:15:48.567 08:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:48.567 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_0 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:49.139 00:15:49.139 real 
0m22.387s 00:15:49.139 user 0m47.193s 00:15:49.139 sys 0m7.528s 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:49.139 ************************************ 00:15:49.139 END TEST nvmf_example 00:15:49.139 ************************************ 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.139 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.139 ************************************ 00:15:49.140 START TEST nvmf_filesystem 00:15:49.140 ************************************ 00:15:49.140 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:49.140 * Looking for test storage... 
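Around this point autotest_common.sh gates coverage options on the installed lcov version (the `lt 1.15 2` / `cmp_versions` trace that follows). `cmp_versions` compares dot-separated fields one by one; the same "is this version older?" test can be sketched with GNU `sort -V` instead — a different technique than the script's field loop, shown here only as an equivalent check:

```shell
#!/usr/bin/env bash
# Sketch: "is version $1 older than $2?" using GNU sort -V, rather than
# the field-by-field loop cmp_versions uses in scripts/common.sh.
version_lt() {
  [ "$1" = "$2" ] && return 1   # equal is not less-than
  # sort -V orders version strings; $1 is smaller iff it sorts first
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

`version_lt 1.15 2` succeeds, matching the `lt 1.15 2` check the log performs before enabling the lcov branch/function coverage flags.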
00:15:49.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.140 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.140 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.140 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.406 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.406 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.406 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.406 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:49.407 
08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.407 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:49.407 --rc genhtml_branch_coverage=1 00:15:49.407 --rc genhtml_function_coverage=1 00:15:49.407 --rc genhtml_legend=1 00:15:49.407 --rc geninfo_all_blocks=1 00:15:49.407 --rc geninfo_unexecuted_blocks=1 00:15:49.407 00:15:49.407 ' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.407 --rc genhtml_branch_coverage=1 00:15:49.407 --rc genhtml_function_coverage=1 00:15:49.407 --rc genhtml_legend=1 00:15:49.407 --rc geninfo_all_blocks=1 00:15:49.407 --rc geninfo_unexecuted_blocks=1 00:15:49.407 00:15:49.407 ' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.407 --rc genhtml_branch_coverage=1 00:15:49.407 --rc genhtml_function_coverage=1 00:15:49.407 --rc genhtml_legend=1 00:15:49.407 --rc geninfo_all_blocks=1 00:15:49.407 --rc geninfo_unexecuted_blocks=1 00:15:49.407 00:15:49.407 ' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.407 --rc genhtml_branch_coverage=1 00:15:49.407 --rc genhtml_function_coverage=1 00:15:49.407 --rc genhtml_legend=1 00:15:49.407 --rc geninfo_all_blocks=1 00:15:49.407 --rc geninfo_unexecuted_blocks=1 00:15:49.407 00:15:49.407 ' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:49.407 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:49.407 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:49.407 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:49.407 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:49.408 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:49.408 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:49.408 
08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:49.408 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:49.408 #define SPDK_CONFIG_H 00:15:49.408 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:49.408 #define SPDK_CONFIG_APPS 1 00:15:49.408 #define SPDK_CONFIG_ARCH native 00:15:49.408 #undef SPDK_CONFIG_ASAN 00:15:49.408 #undef SPDK_CONFIG_AVAHI 00:15:49.408 #undef SPDK_CONFIG_CET 00:15:49.408 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:49.408 #define SPDK_CONFIG_COVERAGE 1 00:15:49.408 #define SPDK_CONFIG_CROSS_PREFIX 00:15:49.408 #undef SPDK_CONFIG_CRYPTO 00:15:49.408 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:49.408 #undef SPDK_CONFIG_CUSTOMOCF 00:15:49.408 #undef SPDK_CONFIG_DAOS 00:15:49.408 #define SPDK_CONFIG_DAOS_DIR 00:15:49.408 #define SPDK_CONFIG_DEBUG 1 00:15:49.408 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:49.408 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:49.408 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:49.408 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:49.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:49.408 #undef SPDK_CONFIG_DPDK_UADK 00:15:49.408 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:49.408 #define SPDK_CONFIG_EXAMPLES 1 00:15:49.408 #undef SPDK_CONFIG_FC 00:15:49.408 #define SPDK_CONFIG_FC_PATH 00:15:49.408 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:49.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:49.408 #define SPDK_CONFIG_FSDEV 1 00:15:49.408 #undef SPDK_CONFIG_FUSE 00:15:49.408 #undef SPDK_CONFIG_FUZZER 00:15:49.408 #define SPDK_CONFIG_FUZZER_LIB 00:15:49.408 #undef SPDK_CONFIG_GOLANG 00:15:49.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:49.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:49.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:49.408 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:49.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:49.408 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:49.408 #undef SPDK_CONFIG_HAVE_LZ4 00:15:49.408 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:49.408 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:49.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:49.408 #define SPDK_CONFIG_IDXD 1 00:15:49.408 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:49.408 #undef SPDK_CONFIG_IPSEC_MB 00:15:49.408 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:49.408 #define SPDK_CONFIG_ISAL 1 00:15:49.408 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:49.408 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:49.408 #define SPDK_CONFIG_LIBDIR 00:15:49.408 #undef SPDK_CONFIG_LTO 00:15:49.408 #define SPDK_CONFIG_MAX_LCORES 128 00:15:49.408 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:49.408 #define SPDK_CONFIG_NVME_CUSE 1 00:15:49.408 #undef SPDK_CONFIG_OCF 00:15:49.408 #define SPDK_CONFIG_OCF_PATH 00:15:49.408 #define SPDK_CONFIG_OPENSSL_PATH 00:15:49.408 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:49.408 #define SPDK_CONFIG_PGO_DIR 00:15:49.408 #undef SPDK_CONFIG_PGO_USE 00:15:49.408 #define SPDK_CONFIG_PREFIX /usr/local 00:15:49.408 #undef SPDK_CONFIG_RAID5F 00:15:49.408 #undef SPDK_CONFIG_RBD 00:15:49.408 #define SPDK_CONFIG_RDMA 1 00:15:49.408 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:49.408 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:49.408 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:49.408 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:49.408 #define SPDK_CONFIG_SHARED 1 00:15:49.408 #undef SPDK_CONFIG_SMA 00:15:49.408 #define SPDK_CONFIG_TESTS 1 00:15:49.408 #undef SPDK_CONFIG_TSAN 00:15:49.408 #define SPDK_CONFIG_UBLK 1 00:15:49.408 #define SPDK_CONFIG_UBSAN 1 00:15:49.408 #undef SPDK_CONFIG_UNIT_TESTS 00:15:49.408 #undef SPDK_CONFIG_URING 00:15:49.408 #define SPDK_CONFIG_URING_PATH 00:15:49.408 #undef SPDK_CONFIG_URING_ZNS 00:15:49.408 #undef SPDK_CONFIG_USDT 00:15:49.409 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:49.409 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:49.409 #define SPDK_CONFIG_VFIO_USER 1 00:15:49.409 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:49.409 #define SPDK_CONFIG_VHOST 1 00:15:49.409 #define SPDK_CONFIG_VIRTIO 1 00:15:49.409 #undef SPDK_CONFIG_VTUNE 00:15:49.409 #define SPDK_CONFIG_VTUNE_DIR 00:15:49.409 #define SPDK_CONFIG_WERROR 1 00:15:49.409 #define SPDK_CONFIG_WPDK_DIR 00:15:49.409 #undef SPDK_CONFIG_XNVME 00:15:49.409 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:49.409 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:49.409 
08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:49.409 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:49.410 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:49.410 
08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:49.410 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:49.410 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:49.411 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1873485 ]] 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1873485 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:49.411 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mykrIL 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mykrIL/tests/target /tmp/spdk.mykrIL 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122343731200 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7012818944 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64670081024 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8192000 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847697408 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23613440 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:15:49.412 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677658624 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=618496 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:49.412 * Looking for test storage... 
00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122343731200 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9227411456 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.412 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:49.412 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.412 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.413 --rc genhtml_branch_coverage=1 00:15:49.413 --rc genhtml_function_coverage=1 00:15:49.413 --rc genhtml_legend=1 00:15:49.413 --rc geninfo_all_blocks=1 00:15:49.413 --rc geninfo_unexecuted_blocks=1 00:15:49.413 00:15:49.413 ' 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.413 --rc genhtml_branch_coverage=1 00:15:49.413 --rc genhtml_function_coverage=1 00:15:49.413 --rc genhtml_legend=1 00:15:49.413 --rc geninfo_all_blocks=1 00:15:49.413 --rc geninfo_unexecuted_blocks=1 00:15:49.413 00:15:49.413 ' 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.413 --rc genhtml_branch_coverage=1 00:15:49.413 --rc genhtml_function_coverage=1 00:15:49.413 --rc genhtml_legend=1 00:15:49.413 --rc geninfo_all_blocks=1 00:15:49.413 --rc geninfo_unexecuted_blocks=1 00:15:49.413 00:15:49.413 ' 00:15:49.413 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.413 --rc genhtml_branch_coverage=1 00:15:49.413 --rc genhtml_function_coverage=1 00:15:49.413 --rc genhtml_legend=1 00:15:49.413 --rc geninfo_all_blocks=1 00:15:49.413 --rc geninfo_unexecuted_blocks=1 00:15:49.413 00:15:49.413 ' 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.676 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.676 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.676 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:49.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.677 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:15:49.677 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:57.834 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:57.834 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:57.834 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:57.834 Found net devices under 0000:31:00.0: cvl_0_0 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:57.834 Found net devices under 0000:31:00.1: cvl_0_1 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
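The xtrace above shows `gather_supported_nvmf_pci_devs` matching each port's vendor:device pair (here `0x8086:0x159b`) against per-family arrays (`e810`, `x722`, `mlx`) before deciding it is a supported hardware NIC (`is_hw=yes`). A minimal Python sketch of that lookup, with the device IDs transcribed from the log lines above (the `classify` helper name is hypothetical, not part of the SPDK scripts):

```python
# Vendor IDs as declared in nvmf/common.sh: intel=0x8086, mellanox=0x15b3.
INTEL, MELLANOX = 0x8086, 0x15B3

# (vendor, device) -> family, mirroring the e810/x722/mlx arrays in the log.
FAMILIES = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0xA2DC): "mlx",
    (MELLANOX, 0x1021): "mlx",
    (MELLANOX, 0xA2D6): "mlx",
    (MELLANOX, 0x101D): "mlx",
    (MELLANOX, 0x101B): "mlx",
    (MELLANOX, 0x1017): "mlx",
    (MELLANOX, 0x1019): "mlx",
    (MELLANOX, 0x1015): "mlx",
    (MELLANOX, 0x1013): "mlx",
}

def classify(vendor: int, device: int) -> str:
    """Return the NIC family for a PCI vendor/device pair, or 'unknown'."""
    return FAMILIES.get((vendor, device), "unknown")

# Both ports found in the log (0000:31:00.0 and 0000:31:00.1) report
# 0x8086:0x159b, i.e. Intel E810, driven by the 'ice' driver:
print(classify(0x8086, 0x159B))  # e810
```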
00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@247 -- # create_target_ns 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:57.834 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:57.835 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:57.835 10.0.0.1 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:57.835 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:58.165 10.0.0.2 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:58.165 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:58.165 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:58.166 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:58.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.675 ms 00:15:58.166 00:15:58.166 --- 10.0.0.1 ping statistics --- 00:15:58.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.166 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:58.166 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:58.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:15:58.166 00:15:58.166 --- 10.0.0.2 ping statistics --- 00:15:58.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.166 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:58.166 
08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:58.166 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:58.167 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 
-- # '[' 3 -le 1 ']' 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.167 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:58.517 ************************************ 00:15:58.517 START TEST nvmf_filesystem_no_in_capsule 00:15:58.517 ************************************ 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1877859 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1877859 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1877859 ']' 00:15:58.517 08:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.517 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:58.517 [2024-11-20 08:13:02.952764] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:15:58.517 [2024-11-20 08:13:02.952821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.517 [2024-11-20 08:13:03.043551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.517 [2024-11-20 08:13:03.080304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.517 [2024-11-20 08:13:03.080336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.517 [2024-11-20 08:13:03.080345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.517 [2024-11-20 08:13:03.080352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:58.517 [2024-11-20 08:13:03.080357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.517 [2024-11-20 08:13:03.081896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.517 [2024-11-20 08:13:03.081988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.517 [2024-11-20 08:13:03.082123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.517 [2024-11-20 08:13:03.082123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.088 08:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 [2024-11-20 08:13:03.787801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.088 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 Malloc1 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 
08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 [2024-11-20 08:13:03.918422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:59.349 08:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:59.349 { 00:15:59.350 "name": "Malloc1", 00:15:59.350 "aliases": [ 00:15:59.350 "71f00838-28e7-45e5-803d-1dd06ab32ddb" 00:15:59.350 ], 00:15:59.350 "product_name": "Malloc disk", 00:15:59.350 "block_size": 512, 00:15:59.350 "num_blocks": 1048576, 00:15:59.350 "uuid": "71f00838-28e7-45e5-803d-1dd06ab32ddb", 00:15:59.350 "assigned_rate_limits": { 00:15:59.350 "rw_ios_per_sec": 0, 00:15:59.350 "rw_mbytes_per_sec": 0, 00:15:59.350 "r_mbytes_per_sec": 0, 00:15:59.350 "w_mbytes_per_sec": 0 00:15:59.350 }, 00:15:59.350 "claimed": true, 00:15:59.350 "claim_type": "exclusive_write", 00:15:59.350 "zoned": false, 00:15:59.350 "supported_io_types": { 00:15:59.350 "read": true, 00:15:59.350 "write": true, 00:15:59.350 "unmap": true, 00:15:59.350 "flush": true, 00:15:59.350 "reset": true, 00:15:59.350 "nvme_admin": false, 00:15:59.350 "nvme_io": false, 00:15:59.350 "nvme_io_md": false, 00:15:59.350 "write_zeroes": true, 00:15:59.350 "zcopy": true, 00:15:59.350 "get_zone_info": false, 00:15:59.350 "zone_management": false, 00:15:59.350 "zone_append": false, 00:15:59.350 "compare": false, 00:15:59.350 "compare_and_write": false, 00:15:59.350 "abort": true, 00:15:59.350 "seek_hole": false, 00:15:59.350 "seek_data": false, 00:15:59.350 "copy": true, 00:15:59.350 "nvme_iov_md": false 00:15:59.350 }, 00:15:59.350 "memory_domains": [ 00:15:59.350 { 00:15:59.350 "dma_device_id": "system", 00:15:59.350 "dma_device_type": 1 
00:15:59.350 }, 00:15:59.350 { 00:15:59.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.350 "dma_device_type": 2 00:15:59.350 } 00:15:59.350 ], 00:15:59.350 "driver_specific": {} 00:15:59.350 } 00:15:59.350 ]' 00:15:59.350 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:59.350 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:59.350 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:59.350 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:59.350 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:59.350 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:59.350 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:59.350 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.265 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:01.265 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.265 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.265 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:01.265 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:03.178 08:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:03.178 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:03.438 08:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:04.381 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:04.381 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:04.381 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:04.381 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.381 08:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:04.641 ************************************ 00:16:04.641 START TEST filesystem_ext4 00:16:04.641 ************************************ 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:04.641 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:04.641 08:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:04.641 mke2fs 1.47.0 (5-Feb-2023) 00:16:04.641 Discarding device blocks: 0/522240 done 00:16:04.642 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:04.642 Filesystem UUID: 12ba07d6-560a-42f2-8887-9d161670e9fb 00:16:04.642 Superblock backups stored on blocks: 00:16:04.642 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:04.642 00:16:04.642 Allocating group tables: 0/64 done 00:16:04.642 Writing inode tables: 0/64 done 00:16:04.642 Creating journal (8192 blocks): done 00:16:04.642 Writing superblocks and filesystem accounting information: 0/64 done 00:16:04.642 00:16:04.642 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:04.642 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1877859 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:11.227 00:16:11.227 real 0m5.699s 00:16:11.227 user 0m0.032s 00:16:11.227 sys 0m0.080s 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:11.227 ************************************ 00:16:11.227 END TEST filesystem_ext4 00:16:11.227 ************************************ 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.227 ************************************ 00:16:11.227 START TEST filesystem_btrfs 00:16:11.227 ************************************ 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:11.227 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:11.228 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:11.228 btrfs-progs v6.8.1 00:16:11.228 See https://btrfs.readthedocs.io for more information. 00:16:11.228 00:16:11.228 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:11.228 NOTE: several default settings have changed in version 5.15, please make sure 00:16:11.228 this does not affect your deployments: 00:16:11.228 - DUP for metadata (-m dup) 00:16:11.228 - enabled no-holes (-O no-holes) 00:16:11.228 - enabled free-space-tree (-R free-space-tree) 00:16:11.228 00:16:11.228 Label: (null) 00:16:11.228 UUID: 999fbd53-4355-429e-8f8e-83a61f30759a 00:16:11.228 Node size: 16384 00:16:11.228 Sector size: 4096 (CPU page size: 4096) 00:16:11.228 Filesystem size: 510.00MiB 00:16:11.228 Block group profiles: 00:16:11.228 Data: single 8.00MiB 00:16:11.228 Metadata: DUP 32.00MiB 00:16:11.228 System: DUP 8.00MiB 00:16:11.228 SSD detected: yes 00:16:11.228 Zoned device: no 00:16:11.228 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:11.228 Checksum: crc32c 00:16:11.228 Number of devices: 1 00:16:11.228 Devices: 00:16:11.228 ID SIZE PATH 00:16:11.228 1 510.00MiB /dev/nvme0n1p1 00:16:11.228 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1877859 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:11.228 00:16:11.228 real 0m0.826s 00:16:11.228 user 0m0.029s 00:16:11.228 sys 0m0.117s 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:11.228 ************************************ 00:16:11.228 END TEST filesystem_btrfs 00:16:11.228 ************************************ 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs 
nvmf_filesystem_create xfs nvme0n1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:11.228 ************************************ 00:16:11.228 START TEST filesystem_xfs 00:16:11.228 ************************************ 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:11.228 08:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:11.228 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:11.228 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:11.228 = sectsz=512 attr=2, projid32bit=1 00:16:11.228 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:11.228 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:11.228 data = bsize=4096 blocks=130560, imaxpct=25 00:16:11.228 = sunit=0 swidth=0 blks 00:16:11.228 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:11.228 log =internal log bsize=4096 blocks=16384, version=2 00:16:11.228 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:11.228 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:12.172 Discarding blocks...Done. 
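Throughout this log the harness checks that the size the NVMe/TCP initiator sees for nvme0n1 matches the backing Malloc1 bdev before it partitions and formats the device (`get_bdev_size` pulls `block_size` and `num_blocks` out of `bdev_get_bdevs` JSON, and `sec_size_to_bytes` echoes 536870912 from sysfs). A minimal Python sketch of that comparison, with the values hard-coded from the log above rather than read live from `rpc.py` or `/sys/block`, might look like:

```python
# Sketch of the size check the harness performs before running parted:
# the byte size derived from bdev_get_bdevs (block_size * num_blocks)
# must equal the size the initiator reports for the connected namespace.
# All numbers below are copied from the trace, not queried from a target.

def bdev_bytes(block_size: int, num_blocks: int) -> int:
    """Byte size of a bdev as computed from its JSON description."""
    return block_size * num_blocks

# Malloc1 as created by `bdev_malloc_create 512 512 -b Malloc1`:
# 1048576 blocks of 512 bytes = 512 MiB.
malloc_size = bdev_bytes(512, 1048576)

# Size reported on the initiator side for /sys/block/nvme0n1 in the log.
nvme_size = 536870912

# The test only proceeds to mkfs when these agree.
assert malloc_size == nvme_size == 512 * 1024 * 1024
print(malloc_size)
```

This mirrors the `(( nvme_size == malloc_size ))` guard in filesystem.sh; the function name `bdev_bytes` is an illustrative stand-in, not an SPDK helper.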
00:16:12.172 08:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:12.172 08:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1877859 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:14.716 08:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:14.716 00:16:14.716 real 0m3.281s 00:16:14.716 user 0m0.027s 00:16:14.716 sys 0m0.079s 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:14.716 ************************************ 00:16:14.716 END TEST filesystem_xfs 00:16:14.716 ************************************ 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:14.716 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:14.717 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1877859 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1877859 ']' 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1877859 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1877859 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1877859' 00:16:14.978 killing process with pid 1877859 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1877859 00:16:14.978 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1877859 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:15.238 00:16:15.238 real 0m16.980s 00:16:15.238 user 1m7.095s 00:16:15.238 sys 0m1.388s 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.238 ************************************ 00:16:15.238 END TEST nvmf_filesystem_no_in_capsule 00:16:15.238 ************************************ 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.238 08:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:15.238 ************************************ 00:16:15.238 START TEST nvmf_filesystem_in_capsule 00:16:15.238 ************************************ 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1881414 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1881414 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1881414 ']' 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.238 08:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.238 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.239 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.239 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.499 [2024-11-20 08:13:20.010813] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:16:15.499 [2024-11-20 08:13:20.010860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.499 [2024-11-20 08:13:20.099059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.499 [2024-11-20 08:13:20.135279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.499 [2024-11-20 08:13:20.135317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.499 [2024-11-20 08:13:20.135325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.499 [2024-11-20 08:13:20.135332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.499 [2024-11-20 08:13:20.135337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.499 [2024-11-20 08:13:20.136898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.499 [2024-11-20 08:13:20.137098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.499 [2024-11-20 08:13:20.137225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.499 [2024-11-20 08:13:20.137226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 [2024-11-20 08:13:20.867251] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 Malloc1 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 08:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 [2024-11-20 08:13:21.001538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.443 08:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.443 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:16.443 { 00:16:16.443 "name": "Malloc1", 00:16:16.443 "aliases": [ 00:16:16.443 "cc71d2f4-bdbb-4603-ae78-f392b6a13c1b" 00:16:16.443 ], 00:16:16.443 "product_name": "Malloc disk", 00:16:16.444 "block_size": 512, 00:16:16.444 "num_blocks": 1048576, 00:16:16.444 "uuid": "cc71d2f4-bdbb-4603-ae78-f392b6a13c1b", 00:16:16.444 "assigned_rate_limits": { 00:16:16.444 "rw_ios_per_sec": 0, 00:16:16.444 "rw_mbytes_per_sec": 0, 00:16:16.444 "r_mbytes_per_sec": 0, 00:16:16.444 "w_mbytes_per_sec": 0 00:16:16.444 }, 00:16:16.444 "claimed": true, 00:16:16.444 "claim_type": "exclusive_write", 00:16:16.444 "zoned": false, 00:16:16.444 "supported_io_types": { 00:16:16.444 "read": true, 00:16:16.444 "write": true, 00:16:16.444 "unmap": true, 00:16:16.444 "flush": true, 00:16:16.444 "reset": true, 00:16:16.444 "nvme_admin": false, 00:16:16.444 "nvme_io": false, 00:16:16.444 "nvme_io_md": false, 00:16:16.444 "write_zeroes": true, 00:16:16.444 "zcopy": true, 00:16:16.444 "get_zone_info": false, 00:16:16.444 "zone_management": false, 00:16:16.444 "zone_append": false, 00:16:16.444 "compare": false, 00:16:16.444 "compare_and_write": false, 00:16:16.444 "abort": true, 00:16:16.444 "seek_hole": false, 00:16:16.444 "seek_data": false, 00:16:16.444 "copy": true, 00:16:16.444 "nvme_iov_md": false 00:16:16.444 }, 00:16:16.444 "memory_domains": [ 00:16:16.444 { 00:16:16.444 "dma_device_id": "system", 00:16:16.444 "dma_device_type": 1 00:16:16.444 }, 00:16:16.444 { 00:16:16.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.444 "dma_device_type": 2 00:16:16.444 } 00:16:16.444 ], 00:16:16.444 
"driver_specific": {} 00:16:16.444 } 00:16:16.444 ]' 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:16.444 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.359 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.359 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.359 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.359 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:18.359 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:20.274 08:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:20.274 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:20.845 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:21.787 ************************************ 00:16:21.787 START TEST filesystem_in_capsule_ext4 00:16:21.787 ************************************ 00:16:21.787 08:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:21.787 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:21.787 mke2fs 1.47.0 (5-Feb-2023) 00:16:21.787 Discarding device blocks: 
0/522240 done 00:16:21.787 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:21.787 Filesystem UUID: 09909559-e833-4708-b965-937b6ea80f7f 00:16:21.787 Superblock backups stored on blocks: 00:16:21.787 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:21.787 00:16:21.787 Allocating group tables: 0/64 done 00:16:21.787 Writing inode tables: 0/64 done 00:16:22.048 Creating journal (8192 blocks): done 00:16:22.048 Writing superblocks and filesystem accounting information: 0/64 done 00:16:22.048 00:16:22.048 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:22.048 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:27.336 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1881414 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:27.596 00:16:27.596 real 0m5.705s 00:16:27.596 user 0m0.035s 00:16:27.596 sys 0m0.073s 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:27.596 ************************************ 00:16:27.596 END TEST filesystem_in_capsule_ext4 00:16:27.596 ************************************ 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.596 ************************************ 00:16:27.596 START 
TEST filesystem_in_capsule_btrfs 00:16:27.596 ************************************ 00:16:27.596 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:27.597 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:27.858 btrfs-progs v6.8.1 00:16:27.858 See https://btrfs.readthedocs.io for more information. 00:16:27.858 00:16:27.858 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:27.858 NOTE: several default settings have changed in version 5.15, please make sure 00:16:27.858 this does not affect your deployments: 00:16:27.858 - DUP for metadata (-m dup) 00:16:27.858 - enabled no-holes (-O no-holes) 00:16:27.858 - enabled free-space-tree (-R free-space-tree) 00:16:27.858 00:16:27.858 Label: (null) 00:16:27.858 UUID: 66f1f1d8-cfaa-4583-accb-ce65828565d1 00:16:27.858 Node size: 16384 00:16:27.858 Sector size: 4096 (CPU page size: 4096) 00:16:27.858 Filesystem size: 510.00MiB 00:16:27.858 Block group profiles: 00:16:27.858 Data: single 8.00MiB 00:16:27.858 Metadata: DUP 32.00MiB 00:16:27.858 System: DUP 8.00MiB 00:16:27.858 SSD detected: yes 00:16:27.858 Zoned device: no 00:16:27.858 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:27.858 Checksum: crc32c 00:16:27.858 Number of devices: 1 00:16:27.858 Devices: 00:16:27.858 ID SIZE PATH 00:16:27.858 1 510.00MiB /dev/nvme0n1p1 00:16:27.858 00:16:27.858 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:27.858 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1881414 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:28.118 00:16:28.118 real 0m0.587s 00:16:28.118 user 0m0.028s 00:16:28.118 sys 0m0.118s 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.118 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:28.118 ************************************ 00:16:28.118 END TEST filesystem_in_capsule_btrfs 00:16:28.118 ************************************ 00:16:28.378 08:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.378 ************************************ 00:16:28.378 START TEST filesystem_in_capsule_xfs 00:16:28.378 ************************************ 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:28.378 
08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:28.378 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:28.378 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:28.378 = sectsz=512 attr=2, projid32bit=1 00:16:28.378 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:28.378 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:28.378 data = bsize=4096 blocks=130560, imaxpct=25 00:16:28.378 = sunit=0 swidth=0 blks 00:16:28.378 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:28.378 log =internal log bsize=4096 blocks=16384, version=2 00:16:28.378 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:28.378 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:29.318 Discarding blocks...Done. 
00:16:29.318 08:13:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:29.318 08:13:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1881414 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:31.228 00:16:31.228 real 0m2.914s 00:16:31.228 user 0m0.032s 00:16:31.228 sys 0m0.077s 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:31.228 ************************************ 00:16:31.228 END TEST filesystem_in_capsule_xfs 00:16:31.228 ************************************ 00:16:31.228 08:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:31.489 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:31.489 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.750 08:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1881414 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1881414 ']' 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1881414 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:31.750 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.751 08:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1881414 00:16:31.751 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.751 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.751 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1881414' 00:16:31.751 killing process with pid 1881414 00:16:31.751 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1881414 00:16:31.751 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1881414 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:32.011 00:16:32.011 real 0m16.655s 00:16:32.011 user 1m5.823s 00:16:32.011 sys 0m1.374s 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:32.011 ************************************ 00:16:32.011 END TEST nvmf_filesystem_in_capsule 00:16:32.011 ************************************ 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:32.011 rmmod nvme_tcp 00:16:32.011 rmmod nvme_fabrics 00:16:32.011 rmmod nvme_keyring 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:32.011 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:34.555 08:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-save 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:16:34.555 00:16:34.555 real 0m45.048s 00:16:34.555 user 2m15.626s 00:16:34.555 sys 0m9.439s 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:34.555 ************************************ 00:16:34.555 END TEST nvmf_filesystem 00:16:34.555 ************************************ 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.555 ************************************ 00:16:34.555 START TEST nvmf_target_discovery 00:16:34.555 ************************************ 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:16:34.555 * Looking for test storage... 00:16:34.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.555 08:13:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.555 --rc genhtml_branch_coverage=1 00:16:34.555 --rc genhtml_function_coverage=1 00:16:34.555 --rc genhtml_legend=1 00:16:34.555 --rc geninfo_all_blocks=1 00:16:34.555 --rc geninfo_unexecuted_blocks=1 00:16:34.555 00:16:34.555 ' 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.555 --rc genhtml_branch_coverage=1 00:16:34.555 --rc genhtml_function_coverage=1 00:16:34.555 --rc genhtml_legend=1 00:16:34.555 --rc geninfo_all_blocks=1 00:16:34.555 --rc geninfo_unexecuted_blocks=1 00:16:34.555 00:16:34.555 ' 00:16:34.555 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.555 --rc genhtml_branch_coverage=1 00:16:34.556 --rc genhtml_function_coverage=1 00:16:34.556 --rc genhtml_legend=1 00:16:34.556 --rc geninfo_all_blocks=1 00:16:34.556 --rc geninfo_unexecuted_blocks=1 00:16:34.556 00:16:34.556 ' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.556 --rc genhtml_branch_coverage=1 00:16:34.556 --rc genhtml_function_coverage=1 00:16:34.556 --rc genhtml_legend=1 00:16:34.556 --rc geninfo_all_blocks=1 00:16:34.556 --rc geninfo_unexecuted_blocks=1 00:16:34.556 00:16:34.556 ' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.556 
08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.556 08:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:34.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:34.556 08:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:16:34.556 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:16:34.557 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:42.695 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.695 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.696 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:42.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:42.696 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:42.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:42.696 Found net devices under 0000:31:00.0: cvl_0_0 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:31:00.1: cvl_0_1' 00:16:42.696 Found net devices under 0000:31:00.1: cvl_0_1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:42.696 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:42.696 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:16:42.696 
08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:42.696 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:42.696 10.0.0.1 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:42.697 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:42.697 10.0.0.2 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip 
netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:42.697 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:42.958 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address 
initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:42.959 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:42.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.610 ms 00:16:42.959 00:16:42.959 --- 10.0.0.1 ping statistics --- 00:16:42.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.959 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:42.959 
08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:42.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:16:42.959 00:16:42.959 --- 10.0.0.2 ping statistics --- 00:16:42.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.959 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:42.959 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:42.959 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n 
initiator1 ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:42.960 
08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=1889700 00:16:42.960 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 1889700 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1889700 ']' 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.960 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.960 [2024-11-20 08:13:47.683813] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:16:42.960 [2024-11-20 08:13:47.683880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.225 [2024-11-20 08:13:47.776885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.225 [2024-11-20 08:13:47.817624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:43.225 [2024-11-20 08:13:47.817660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.225 [2024-11-20 08:13:47.817669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.225 [2024-11-20 08:13:47.817675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.225 [2024-11-20 08:13:47.817681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.225 [2024-11-20 08:13:47.819206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.225 [2024-11-20 08:13:47.819324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.225 [2024-11-20 08:13:47.819480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.225 [2024-11-20 08:13:47.819481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.797 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.797 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:43.797 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:43.797 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.797 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 [2024-11-20 08:13:48.545708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 Null1 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 
08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 [2024-11-20 08:13:48.606053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 Null2 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 
08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.058 Null3 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.058 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 Null4 00:16:44.059 
08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.059 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:16:44.320
00:16:44.320 Discovery Log Number of Records 6, Generation counter 6
00:16:44.320 =====Discovery Log Entry 0======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: current discovery subsystem
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4420
00:16:44.320 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:44.320 traddr: 10.0.0.2
00:16:44.320 eflags: explicit discovery connections, duplicate discovery information
00:16:44.320 sectype: none
00:16:44.320 =====Discovery Log Entry 1======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: nvme subsystem
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4420
00:16:44.320 subnqn: nqn.2016-06.io.spdk:cnode1
00:16:44.320 traddr: 10.0.0.2
00:16:44.320 eflags: none
00:16:44.320 sectype: none
00:16:44.320 =====Discovery Log Entry 2======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: nvme subsystem
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4420
00:16:44.320 subnqn: nqn.2016-06.io.spdk:cnode2
00:16:44.320 traddr: 10.0.0.2
00:16:44.320 eflags: none
00:16:44.320 sectype: none
00:16:44.320 =====Discovery Log Entry 3======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: nvme subsystem
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4420
00:16:44.320 subnqn: nqn.2016-06.io.spdk:cnode3
00:16:44.320 traddr: 10.0.0.2
00:16:44.320 eflags: none
00:16:44.320 sectype: none
00:16:44.320 =====Discovery Log Entry 4======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: nvme subsystem
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4420
00:16:44.320 subnqn: nqn.2016-06.io.spdk:cnode4
00:16:44.320 traddr: 10.0.0.2
00:16:44.320 eflags: none
00:16:44.320 sectype: none
00:16:44.320 =====Discovery Log Entry 5======
00:16:44.320 trtype: tcp
00:16:44.320 adrfam: ipv4
00:16:44.320 subtype: discovery subsystem referral
00:16:44.320 treq: not required
00:16:44.320 portid: 0
00:16:44.320 trsvcid: 4430
00:16:44.321 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:44.321 traddr: 10.0.0.2
00:16:44.321 eflags: none
00:16:44.321 sectype: none
00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:16:44.321 Perform nvmf subsystem discovery via RPC
00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems
00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.321 [
00:16:44.321 {
00:16:44.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:44.321 "subtype": "Discovery",
00:16:44.321 "listen_addresses": [
00:16:44.321 {
00:16:44.321 "trtype": "TCP",
00:16:44.321 "adrfam": "IPv4",
00:16:44.321 "traddr": "10.0.0.2",
00:16:44.321 "trsvcid": "4420"
00:16:44.321 }
00:16:44.321 ],
00:16:44.321 "allow_any_host": true,
00:16:44.321 "hosts": []
00:16:44.321 },
00:16:44.321 {
00:16:44.321 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:44.321 "subtype": "NVMe",
00:16:44.321 "listen_addresses": [
00:16:44.321 {
00:16:44.321 "trtype": "TCP",
00:16:44.321 "adrfam": "IPv4",
00:16:44.321 "traddr": "10.0.0.2",
00:16:44.321 "trsvcid": "4420"
00:16:44.321 }
00:16:44.321 ],
00:16:44.321 "allow_any_host": true,
00:16:44.321 "hosts": [],
00:16:44.321 "serial_number": "SPDK00000000000001",
00:16:44.321 "model_number": "SPDK bdev Controller",
00:16:44.321 "max_namespaces": 32,
00:16:44.321 "min_cntlid": 1,
00:16:44.321 "max_cntlid": 65519,
00:16:44.321 "namespaces": [
00:16:44.321 {
00:16:44.321 "nsid": 1,
00:16:44.321 "bdev_name": "Null1",
00:16:44.321 "name": "Null1",
00:16:44.321 "nguid": "1377AD97DC6145D3B912C7454B046A1D",
00:16:44.321 "uuid": "1377ad97-dc61-45d3-b912-c7454b046a1d"
00:16:44.321 }
00:16:44.321 ]
00:16:44.321 },
00:16:44.321 {
00:16:44.321 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:44.321 "subtype": "NVMe",
00:16:44.321 "listen_addresses": [
00:16:44.321 {
00:16:44.321 "trtype": "TCP",
00:16:44.321 "adrfam": "IPv4",
00:16:44.321 "traddr": "10.0.0.2",
00:16:44.321 "trsvcid": "4420"
00:16:44.321 }
00:16:44.321 ],
00:16:44.321 "allow_any_host": true,
00:16:44.321 "hosts": [],
00:16:44.321 "serial_number": "SPDK00000000000002",
00:16:44.321 "model_number": "SPDK bdev Controller",
00:16:44.321 "max_namespaces": 32,
00:16:44.321 "min_cntlid": 1,
00:16:44.321 "max_cntlid": 65519,
00:16:44.321 "namespaces": [
00:16:44.321 {
00:16:44.321 "nsid": 1,
00:16:44.321 "bdev_name": "Null2",
00:16:44.321 "name": "Null2",
00:16:44.321 "nguid": "0E15263E43A54E39AA1AD8FE7273A8D4",
00:16:44.321 "uuid": "0e15263e-43a5-4e39-aa1a-d8fe7273a8d4"
00:16:44.321 }
00:16:44.321 ]
00:16:44.321 },
00:16:44.321 {
00:16:44.321 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:16:44.321 "subtype": "NVMe",
00:16:44.321 "listen_addresses": [
00:16:44.321 {
00:16:44.321 "trtype": "TCP",
00:16:44.321 "adrfam": "IPv4",
00:16:44.321 "traddr": "10.0.0.2",
00:16:44.321 "trsvcid": "4420"
00:16:44.321 }
00:16:44.321 ],
00:16:44.321 "allow_any_host": true,
00:16:44.321 "hosts": [],
00:16:44.321 "serial_number": "SPDK00000000000003",
00:16:44.321 "model_number": "SPDK bdev Controller",
00:16:44.321 "max_namespaces": 32,
00:16:44.321 "min_cntlid": 1,
00:16:44.321 "max_cntlid": 65519,
00:16:44.321 "namespaces": [
00:16:44.321 {
00:16:44.321 "nsid": 1,
00:16:44.321 "bdev_name": "Null3",
00:16:44.321 "name": "Null3",
00:16:44.321 "nguid": "591CF01A29A54E50B36692C6A7B4A68C",
00:16:44.321 "uuid": "591cf01a-29a5-4e50-b366-92c6a7b4a68c"
00:16:44.321 }
00:16:44.321 ]
00:16:44.321 },
00:16:44.321 {
00:16:44.321 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:16:44.321 "subtype": "NVMe",
00:16:44.321 "listen_addresses": [
00:16:44.321 {
00:16:44.321 "trtype": "TCP",
00:16:44.321 "adrfam": "IPv4",
00:16:44.321 "traddr": "10.0.0.2",
00:16:44.321 "trsvcid": "4420"
00:16:44.321 }
00:16:44.321 ],
00:16:44.321 "allow_any_host": true,
00:16:44.321 "hosts": [],
00:16:44.321 "serial_number": "SPDK00000000000004",
00:16:44.321 "model_number": "SPDK bdev Controller",
00:16:44.321 "max_namespaces": 32,
00:16:44.321 "min_cntlid": 1,
00:16:44.321 "max_cntlid": 65519,
00:16:44.321 "namespaces": [
00:16:44.321 {
00:16:44.321 "nsid": 1,
00:16:44.321 "bdev_name": "Null4",
00:16:44.321 "name": "Null4",
00:16:44.321 "nguid": "D9EF2987BA7746F488A599C866A0D8DA",
00:16:44.321 "uuid": "d9ef2987-ba77-46f4-88a5-99c866a0d8da"
00:16:44.321 }
00:16:44.321 ]
00:16:44.321 }
00:16:44.321 ]
00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.321
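[Editor's note: the `nvmf_get_subsystems` reply captured in the log above is plain JSON, so it can be post-processed directly when triaging a run. The sketch below is a minimal, hypothetical helper (the `reply` string is a trimmed two-subsystem copy of the logged payload, not the full four-cnode reply) showing one way to summarize each subsystem's NQN, listeners, and namespace count.]

```python
import json

# Trimmed copy of the nvmf_get_subsystems reply seen in the log above
# (discovery subsystem plus cnode1; the real reply lists cnode1..cnode4).
reply = """
[
  {
    "nqn": "nqn.2014-08.org.nvmexpress.discovery",
    "subtype": "Discovery",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": []
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "subtype": "NVMe",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000001",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Null1", "name": "Null1"}
    ]
  }
]
"""

def summarize(subsystems):
    """Return one (nqn, listener-URIs, namespace-count) tuple per subsystem."""
    rows = []
    for sub in subsystems:
        listeners = [
            f"{a['trtype'].lower()}://{a['traddr']}:{a['trsvcid']}"
            for a in sub.get("listen_addresses", [])
        ]
        rows.append((sub["nqn"], listeners, len(sub.get("namespaces", []))))
    return rows

for nqn, listeners, ns_count in summarize(json.loads(reply)):
    print(f"{nqn}  listeners={listeners}  namespaces={ns_count}")
```

In a live run the same summary could be fed from `rpc.py nvmf_get_subsystems` instead of the embedded string.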
08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.321 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd 
bdev_null_delete Null2 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.321 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@45 -- # '[' -n '' ']' 00:16:44.582 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:44.583 rmmod nvme_tcp 00:16:44.583 rmmod nvme_fabrics 00:16:44.583 rmmod nvme_keyring 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 1889700 ']' 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 1889700 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1889700 ']' 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1889700 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889700 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1889700' 00:16:44.583 killing process with pid 1889700 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1889700 00:16:44.583 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1889700 00:16:44.843 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:44.843 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:16:44.844 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:16:44.844 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:44.844 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:44.844 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:44.844 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:46.758 08:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 
-- # eval ' ip addr flush dev cvl_0_1' 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:16:46.758 00:16:46.758 real 0m12.604s 00:16:46.758 user 0m9.004s 00:16:46.758 sys 0m6.778s 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.758 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.020 ************************************ 00:16:47.020 END TEST nvmf_target_discovery 00:16:47.020 ************************************ 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:47.020 ************************************ 00:16:47.020 START TEST 
nvmf_referrals 00:16:47.020 ************************************ 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:47.020 * Looking for test storage... 00:16:47.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.020 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.283 08:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.283 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:47.284 08:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.284 --rc genhtml_branch_coverage=1 00:16:47.284 --rc genhtml_function_coverage=1 00:16:47.284 --rc genhtml_legend=1 00:16:47.284 --rc geninfo_all_blocks=1 00:16:47.284 --rc geninfo_unexecuted_blocks=1 00:16:47.284 00:16:47.284 ' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.284 --rc genhtml_branch_coverage=1 00:16:47.284 --rc genhtml_function_coverage=1 00:16:47.284 --rc genhtml_legend=1 00:16:47.284 --rc geninfo_all_blocks=1 00:16:47.284 --rc geninfo_unexecuted_blocks=1 00:16:47.284 00:16:47.284 ' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.284 --rc genhtml_branch_coverage=1 00:16:47.284 --rc genhtml_function_coverage=1 00:16:47.284 --rc genhtml_legend=1 00:16:47.284 --rc geninfo_all_blocks=1 00:16:47.284 --rc geninfo_unexecuted_blocks=1 00:16:47.284 00:16:47.284 ' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.284 --rc genhtml_branch_coverage=1 00:16:47.284 --rc genhtml_function_coverage=1 00:16:47.284 --rc genhtml_legend=1 00:16:47.284 --rc geninfo_all_blocks=1 00:16:47.284 --rc geninfo_unexecuted_blocks=1 00:16:47.284 00:16:47.284 ' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:47.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:16:47.284 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:55.438 08:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.438 08:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:55.438 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.438 08:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:55.438 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:55.438 Found net devices under 0000:31:00.0: cvl_0_0 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:55.438 Found net devices under 0000:31:00.1: cvl_0_1 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:55.438 08:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@247 -- # create_target_ns 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:55.438 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ phy == veth 
]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
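The `val_to_ip` helper traced here turns the numeric `ip_pool` counter into dotted-quad notation; each interface pair consumes two consecutive values (167772161/167772162, i.e. 10.0.0.1/10.0.0.2). A minimal standalone reconstruction of that conversion — the shift/mask implementation is an assumption, since the trace only shows the final `printf`:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of val_to_ip from the trace above: split a 32-bit
# integer into four octets, most-significant first, and print them.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side of the pair)
val_to_ip 167772162   # 10.0.0.2 (target side of the pair)
```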
00:16:55.439 10.0.0.1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:55.439 10.0.0.2 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:55.439 08:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:55.439 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.439 08:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:55.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
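As the trace shows, `set_ip` writes the address both to the device and to `/sys/class/net/<dev>/ifalias`, and `get_ip_address` later resolves a logical name (`initiator0`/`target0`) through `dev_map` and reads the alias back. A simplified, unprivileged sketch of that round-trip — a temp directory stands in for `/sys/class/net`, and the helper names mirror the trace but are reimplemented here:

```shell
#!/usr/bin/env bash
# Simplified ifalias round-trip: the real set_ip/get_ip_address pair uses
# /sys/class/net/<dev>/ifalias; a temp dir substitutes so no root is needed.
sysfs=$(mktemp -d)
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

set_alias() {                       # stand-in for the tee step in set_ip
    mkdir -p "$sysfs/$1"
    echo "$2" > "$sysfs/$1/ifalias"
}
get_ip_address() {                  # resolve logical name, read alias back
    local dev=${dev_map[$1]}
    cat "$sysfs/$dev/ifalias"
}

set_alias cvl_0_0 10.0.0.1
set_alias cvl_0_1 10.0.0.2
get_ip_address initiator0   # 10.0.0.1
get_ip_address target0      # 10.0.0.2
```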
00:16:55.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.602 ms 00:16:55.439 00:16:55.439 --- 10.0.0.1 ping statistics --- 00:16:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.439 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:16:55.439 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:55.440 08:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:55.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:16:55.440 00:16:55.440 --- 10.0.0.2 ping statistics --- 00:16:55.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.440 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:55.440 08:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # 
get_ip_address initiator1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:55.440 08:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:55.440 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 
00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=1894779 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 1894779 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1894779 ']' 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.701 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.701 [2024-11-20 08:14:00.252399] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:16:55.701 [2024-11-20 08:14:00.252467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.701 [2024-11-20 08:14:00.343348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.701 [2024-11-20 08:14:00.384840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.701 [2024-11-20 08:14:00.384882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.701 [2024-11-20 08:14:00.384890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.701 [2024-11-20 08:14:00.384902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.701 [2024-11-20 08:14:00.384907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
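The `waitforlisten` call traced a few entries above blocks until `nvmf_tgt` has created its RPC socket at `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. The polling idea can be sketched as follows — the file-based stand-in and the timings are illustrative assumptions, not the real helper, which also checks that the process is still alive:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling loop: retry until the RPC socket
# path exists, then proceed. A background `touch` on a temp path stands
# in for nvmf_tgt creating /var/tmp/spdk.sock.
sock=$(mktemp -u)                     # hypothetical socket path for the demo
( sleep 0.2; touch "$sock" ) &        # stand-in for the target starting up

max_retries=100
for (( i = 0; i < max_retries; i++ )); do
    [[ -e $sock ]] && break
    sleep 0.1
done
[[ -e $sock ]] && echo "listening on $sock"
rm -f "$sock"
```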
00:16:55.701 [2024-11-20 08:14:00.386749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.701 [2024-11-20 08:14:00.386887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.701 [2024-11-20 08:14:00.386990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.701 [2024-11-20 08:14:00.386990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.644 [2024-11-20 08:14:01.113649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.644 [2024-11-20 08:14:01.129874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.644 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:56.645 08:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.645 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 08:14:01 
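`get_referral_ips` reduces either view — `rpc_cmd nvmf_discovery_get_referrals` or the JSON from `nvme discover` — to a sorted list of `traddr` values, and the test compares the two with a literal string match, as in the `[[ ... == \1\2\7... ]]` checks above. The comparison pattern in isolation, with sample addresses hard-coded in place of the live RPC/discovery output:

```shell
#!/usr/bin/env bash
# The referral check pattern: normalize both views to one sorted,
# space-separated line of traddr values and compare them literally.
# Sample data replaces the live rpc_cmd / nvme discover output here.
rpc_ips=$(printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3 | sort | xargs)
nvme_ips=$(printf '%s\n' 127.0.0.2 127.0.0.3 127.0.0.4 | sort | xargs)

if [[ $rpc_ips == "$nvme_ips" ]]; then
    echo "referral lists match: $rpc_ips"
else
    echo "referral list mismatch" >&2
    exit 1
fi
```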
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.906 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.166 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:57.167 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:57.428 08:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.428 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:57.690 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.952 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:58.213 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:58.474 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:58.474 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:58.735 rmmod nvme_tcp 00:16:58.735 rmmod nvme_fabrics 00:16:58.735 rmmod nvme_keyring 00:16:58.735 08:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 1894779 ']' 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 1894779 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1894779 ']' 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1894779 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894779 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894779' 00:16:58.735 killing process with pid 1894779 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1894779 00:16:58.735 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1894779 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:58.995 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:00.987 08:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:17:00.987 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:00.987 00:17:00.987 real 0m14.066s 00:17:00.987 user 0m16.001s 00:17:00.988 sys 0m7.085s 00:17:00.988 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.988 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.988 ************************************ 00:17:00.988 END TEST nvmf_referrals 00:17:00.988 ************************************ 00:17:00.988 08:14:05 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:00.988 08:14:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.988 08:14:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.988 08:14:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 ************************************ 00:17:01.304 START TEST nvmf_connect_disconnect 00:17:01.304 ************************************ 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:01.304 * Looking for test storage... 00:17:01.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.304 --rc genhtml_branch_coverage=1 00:17:01.304 --rc 
genhtml_function_coverage=1 00:17:01.304 --rc genhtml_legend=1 00:17:01.304 --rc geninfo_all_blocks=1 00:17:01.304 --rc geninfo_unexecuted_blocks=1 00:17:01.304 00:17:01.304 ' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.304 --rc genhtml_branch_coverage=1 00:17:01.304 --rc genhtml_function_coverage=1 00:17:01.304 --rc genhtml_legend=1 00:17:01.304 --rc geninfo_all_blocks=1 00:17:01.304 --rc geninfo_unexecuted_blocks=1 00:17:01.304 00:17:01.304 ' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.304 --rc genhtml_branch_coverage=1 00:17:01.304 --rc genhtml_function_coverage=1 00:17:01.304 --rc genhtml_legend=1 00:17:01.304 --rc geninfo_all_blocks=1 00:17:01.304 --rc geninfo_unexecuted_blocks=1 00:17:01.304 00:17:01.304 ' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.304 --rc genhtml_branch_coverage=1 00:17:01.304 --rc genhtml_function_coverage=1 00:17:01.304 --rc genhtml_legend=1 00:17:01.304 --rc geninfo_all_blocks=1 00:17:01.304 --rc geninfo_unexecuted_blocks=1 00:17:01.304 00:17:01.304 ' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.304 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:01.305 08:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:01.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.305 08:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:17:01.305 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:09.472 08:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:09.472 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:09.472 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.472 08:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:09.472 Found net devices under 0000:31:00.0: cvl_0_0 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 
00:17:09.472 Found net devices under 0000:31:00.1: cvl_0_1 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:09.472 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:09.472 08:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:09.473 08:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:09.473 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # 
local val=167772161 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:09.473 10.0.0.1 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 
00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:09.473 10.0.0.2 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.473 08:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:09.473 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:09.735 08:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:09.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.682 ms 00:17:09.735 00:17:09.735 --- 10.0.0.1 ping statistics --- 00:17:09.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.735 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:09.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:09.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:17:09.735 00:17:09.735 --- 10.0.0.2 ping statistics --- 00:17:09.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.735 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:09.735 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:09.735 08:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 
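The setup.sh steps traced above all funnel into one primitive: an interface's test IP is stored in its sysfs `ifalias` file, read either directly (`cat /sys/class/net/cvl_0_0/ifalias`) or through `ip netns exec nvmf_ns_spdk`. A minimal sketch of that lookup; the `SYSFS_NET` override is an assumption added here so the sketch runs without real NICs (the real script reads `/sys/class/net` directly):

```shell
#!/bin/sh
# Sketch of setup.sh's get_ip_address as seen in the trace: the IP lives in
# /sys/class/net/<dev>/ifalias, optionally read inside a network namespace.
# SYSFS_NET is a hypothetical override so this runs without real devices.
get_ip_address() {
    dev=$1
    ns_cmd=${2:-}                      # e.g. "ip netns exec nvmf_ns_spdk"
    ip=$($ns_cmd cat "${SYSFS_NET:-/sys/class/net}/$dev/ifalias" 2>/dev/null)
    [ -n "$ip" ] && echo "$ip"         # non-zero status when no alias is set
}
```

On this rig the lookup prints `10.0.0.1` for `cvl_0_0`; `initiator1` has no mapped netdev, so the caller falls back to an empty `NVMF_SECOND_INITIATOR_IP`, exactly as the `return 1` branch in the trace shows.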
00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:17:09.736 08:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=1900262 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 1900262 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1900262 ']' 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.736 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.736 [2024-11-20 08:14:14.419928] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:17:09.736 [2024-11-20 08:14:14.419992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.996 [2024-11-20 08:14:14.510464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.996 [2024-11-20 08:14:14.551781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.996 [2024-11-20 08:14:14.551817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.996 [2024-11-20 08:14:14.551825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.996 [2024-11-20 08:14:14.551832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.996 [2024-11-20 08:14:14.551838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
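`nvmfappstart` above backgrounds `nvmf_tgt` inside the `nvmf_ns_spdk` namespace and then `waitforlisten` polls (up to `max_retries=100`) until the app answers on `/var/tmp/spdk.sock`. A simplified sketch of that wait loop, assuming the flags shown in the trace; the plain existence check is a stand-in for the real script's RPC probe of the socket:

```shell
#!/bin/sh
# Simplified sketch of the waitforlisten pattern from the trace: poll until
# the app's RPC socket shows up, give up when the retries run out.
# The -e existence test is a stand-in for the real script's RPC probe.
wait_for_rpc_sock() {
    sock=${1:-/var/tmp/spdk.sock}
    retries=${2:-100}                  # max_retries=100, as in the trace
    while [ "$retries" -gt 0 ]; do
        [ -e "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                           # target never came up
}
```

In the log the target (`nvmfpid=1900262`) is listening within a second, after which the trap `process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini` is installed and the test proceeds.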
00:17:09.996 [2024-11-20 08:14:14.553691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.996 [2024-11-20 08:14:14.553827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.996 [2024-11-20 08:14:14.553987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.996 [2024-11-20 08:14:14.553987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.568 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.568 [2024-11-20 08:14:15.288913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.828 08:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:10.828 [2024-11-20 08:14:15.357186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:10.828 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:15.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:29.141 rmmod nvme_tcp 00:17:29.141 rmmod nvme_fabrics 00:17:29.141 rmmod nvme_keyring 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 1900262 ']' 00:17:29.141 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 1900262 00:17:29.142 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1900262 ']' 00:17:29.142 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1900262 00:17:29.142 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:29.142 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.142 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900262 00:17:29.402 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.402 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.402 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900262' 00:17:29.402 killing process with pid 1900262 00:17:29.402 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1900262 00:17:29.402 08:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1900262 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.402 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.403 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:31.948 08:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=()
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore
00:17:31.948
00:17:31.948 real 0m30.374s
00:17:31.948 user 1m19.786s
00:17:31.948 sys 0m7.826s
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:17:31.948 ************************************
00:17:31.948 END TEST nvmf_connect_disconnect
00:17:31.948 ************************************
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:31.948 ************************************
00:17:31.948 START TEST nvmf_multitarget
00:17:31.948 ************************************
00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:17:31.948 * Looking for test storage... 
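Before the five connect/disconnect iterations logged above, the test's `rpc_cmd` calls stood the target up in a fixed sequence: TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. Condensed into a sketch; `RPC` is a stand-in variable added here (the trace drives the same calls through `rpc_cmd` over `/var/tmp/spdk.sock`):

```shell
#!/bin/sh
# The rpc_cmd sequence from the nvmf_connect_disconnect trace, condensed.
# RPC is a stand-in for the SPDK rpc.py client on the default socket; after
# this, the test loops `nvme connect`/`nvme disconnect` against the listener
# num_iterations=5 times.
RPC=${RPC:-scripts/rpc.py}
setup_connect_disconnect_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512     # 64 MiB bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

Each call's `[[ 0 == 0 ]]` check in the trace is `rpc_cmd` verifying the RPC returned success before the next step runs.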
00:17:31.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.948 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.948 --rc genhtml_branch_coverage=1 00:17:31.948 --rc genhtml_function_coverage=1 00:17:31.948 --rc genhtml_legend=1 00:17:31.948 --rc geninfo_all_blocks=1 00:17:31.948 --rc geninfo_unexecuted_blocks=1 00:17:31.948 00:17:31.948 ' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.948 --rc genhtml_branch_coverage=1 00:17:31.948 --rc genhtml_function_coverage=1 00:17:31.948 --rc genhtml_legend=1 00:17:31.948 --rc geninfo_all_blocks=1 00:17:31.948 --rc geninfo_unexecuted_blocks=1 00:17:31.948 00:17:31.948 ' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.948 --rc genhtml_branch_coverage=1 00:17:31.948 --rc genhtml_function_coverage=1 00:17:31.948 --rc genhtml_legend=1 00:17:31.948 --rc geninfo_all_blocks=1 00:17:31.948 --rc geninfo_unexecuted_blocks=1 00:17:31.948 00:17:31.948 ' 00:17:31.948 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.948 --rc genhtml_branch_coverage=1 00:17:31.949 --rc genhtml_function_coverage=1 00:17:31.949 --rc genhtml_legend=1 00:17:31.949 --rc geninfo_all_blocks=1 00:17:31.949 --rc geninfo_unexecuted_blocks=1 00:17:31.949 00:17:31.949 ' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.949 08:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@50 -- # : 0 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:31.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:31.949 08:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:17:31.949 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:40.097 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.097 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.098 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:40.098 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:40.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:40.099 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:40.099 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.099 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:40.100 Found net devices under 0000:31:00.0: cvl_0_0 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:40.100 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:40.101 Found net devices under 0000:31:00.1: cvl_0_1 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:40.101 
08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@247 -- # create_target_ns 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:40.101 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:40.102 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:40.102 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:40.103 10.0.0.1 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:40.103 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:40.103 10.0.0.2 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.103 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.104 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:40.104 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:40.367 
08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:40.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.640 ms 00:17:40.367 00:17:40.367 --- 10.0.0.1 ping statistics --- 00:17:40.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.367 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:40.367 
08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:40.367 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:40.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:17:40.368 00:17:40.368 --- 10.0.0.2 ping statistics --- 00:17:40.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.368 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:40.368 08:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.368 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:40.368 08:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.368 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=1909047 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 1909047 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1909047 ']' 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.630 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:40.630 [2024-11-20 08:14:45.150590] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:17:40.630 [2024-11-20 08:14:45.150658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.630 [2024-11-20 08:14:45.246740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.630 [2024-11-20 08:14:45.289518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.630 [2024-11-20 08:14:45.289557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.630 [2024-11-20 08:14:45.289566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.630 [2024-11-20 08:14:45.289573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.630 [2024-11-20 08:14:45.289578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.630 [2024-11-20 08:14:45.291329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.630 [2024-11-20 08:14:45.291448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.630 [2024-11-20 08:14:45.291606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.630 [2024-11-20 08:14:45.291606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.571 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.571 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:41.571 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:41.571 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.571 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:41.571 "nvmf_tgt_1" 00:17:41.571 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:41.571 "nvmf_tgt_2" 00:17:41.832 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:41.832 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:41.832 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:41.832 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:41.832 true 00:17:41.832 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:42.094 true 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:42.094 08:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:17:42.094 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:42.095 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:17:42.095 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:42.095 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:42.095 rmmod nvme_tcp 00:17:42.095 rmmod nvme_fabrics 00:17:42.095 rmmod nvme_keyring 00:17:42.095 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 1909047 ']' 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 1909047 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1909047 ']' 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1909047 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909047 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909047' 00:17:42.357 killing process with pid 1909047 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1909047 00:17:42.357 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1909047 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:42.357 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:44.900 08:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # 
iptables-save 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:17:44.900 00:17:44.900 real 0m12.930s 00:17:44.900 user 0m10.208s 00:17:44.900 sys 0m6.991s 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:44.900 ************************************ 00:17:44.900 END TEST nvmf_multitarget 00:17:44.900 ************************************ 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.900 ************************************ 00:17:44.900 START TEST nvmf_rpc 00:17:44.900 ************************************ 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:44.900 * Looking for test storage... 
00:17:44.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.900 08:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.900 --rc genhtml_branch_coverage=1 00:17:44.900 --rc genhtml_function_coverage=1 00:17:44.900 --rc genhtml_legend=1 00:17:44.900 --rc geninfo_all_blocks=1 00:17:44.900 --rc geninfo_unexecuted_blocks=1 
00:17:44.900 00:17:44.900 ' 00:17:44.900 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.900 --rc genhtml_branch_coverage=1 00:17:44.900 --rc genhtml_function_coverage=1 00:17:44.900 --rc genhtml_legend=1 00:17:44.900 --rc geninfo_all_blocks=1 00:17:44.900 --rc geninfo_unexecuted_blocks=1 00:17:44.900 00:17:44.900 ' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.901 --rc genhtml_branch_coverage=1 00:17:44.901 --rc genhtml_function_coverage=1 00:17:44.901 --rc genhtml_legend=1 00:17:44.901 --rc geninfo_all_blocks=1 00:17:44.901 --rc geninfo_unexecuted_blocks=1 00:17:44.901 00:17:44.901 ' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.901 --rc genhtml_branch_coverage=1 00:17:44.901 --rc genhtml_function_coverage=1 00:17:44.901 --rc genhtml_legend=1 00:17:44.901 --rc geninfo_all_blocks=1 00:17:44.901 --rc geninfo_unexecuted_blocks=1 00:17:44.901 00:17:44.901 ' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.901 08:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:44.901 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:17:44.901 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.045 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:53.045 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:53.045 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:53.045 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:53.045 Found net devices under 0000:31:00.0: cvl_0_0 00:17:53.045 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:53.046 Found net devices under 0000:31:00.1: cvl_0_1 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:53.046 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@247 -- # create_target_ns 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ phy == veth 
]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:53.046 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:53.308 10.0.0.1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:53.308 10.0.0.2 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- 
# ip link set cvl_0_0 up 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:53.308 
08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:53.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.568 ms 00:17:53.308 00:17:53.308 --- 10.0.0.1 ping statistics --- 00:17:53.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.308 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:53.309 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:53.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:53.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:17:53.309 00:17:53.309 --- 10.0.0.2 ping statistics --- 00:17:53.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.309 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:53.309 08:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:53.309 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:17:53.309 08:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:53.309 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:53.571 08:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.571 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=1914192 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 1914192 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1914192 ']' 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.572 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.572 [2024-11-20 08:14:58.152121] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:17:53.572 [2024-11-20 08:14:58.152196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.572 [2024-11-20 08:14:58.242468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.572 [2024-11-20 08:14:58.282916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.572 [2024-11-20 08:14:58.282949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.572 [2024-11-20 08:14:58.282955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.572 [2024-11-20 08:14:58.282960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.572 [2024-11-20 08:14:58.282964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:53.572 [2024-11-20 08:14:58.284236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.572 [2024-11-20 08:14:58.284351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.572 [2024-11-20 08:14:58.284507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.572 [2024-11-20 08:14:58.284509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.514 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.514 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:54.514 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:54.514 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.514 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:54.514 "tick_rate": 2400000000, 00:17:54.514 "poll_groups": [ 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_000", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 
00:17:54.514 "transports": [] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_001", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_002", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_003", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [] 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 }' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.514 [2024-11-20 08:14:59.134133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:54.514 "tick_rate": 2400000000, 00:17:54.514 "poll_groups": [ 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_000", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [ 00:17:54.514 { 00:17:54.514 "trtype": "TCP" 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_001", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [ 00:17:54.514 { 00:17:54.514 "trtype": "TCP" 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_002", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 
"transports": [ 00:17:54.514 { 00:17:54.514 "trtype": "TCP" 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 }, 00:17:54.514 { 00:17:54.514 "name": "nvmf_tgt_poll_group_003", 00:17:54.514 "admin_qpairs": 0, 00:17:54.514 "io_qpairs": 0, 00:17:54.514 "current_admin_qpairs": 0, 00:17:54.514 "current_io_qpairs": 0, 00:17:54.514 "pending_bdev_io": 0, 00:17:54.514 "completed_nvme_io": 0, 00:17:54.514 "transports": [ 00:17:54.514 { 00:17:54.514 "trtype": "TCP" 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 } 00:17:54.514 ] 00:17:54.514 }' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:54.514 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:54.775 08:14:59 
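The `jcount`/`jsum` checks from `target/rpc.sh` exercised above validate the `nvmf_get_stats` output: `jcount` counts the values a `jq` filter matches, and `jsum` totals them with `awk`. A minimal self-contained sketch of that logic, with the `rpc_cmd ... | jq` stage replaced by canned output for the four poll groups (so it runs without a live target or `jq`):

```shell
#!/usr/bin/env bash
# Stand-in for: rpc_cmd nvmf_get_stats | jq '.poll_groups[].name'
# (one line per poll group; the jq stage is elided to keep this runnable)
names=$(printf 'nvmf_tgt_poll_group_%03d\n' 0 1 2 3)

# jcount: count matched values, as in the (( 4 == 4 )) check above
count=$(printf '%s\n' "$names" | wc -l)

# jsum: sum numeric values, as in (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))
sum=$(printf '0\n0\n0\n0\n' | awk '{s+=$1} END {print s}')

echo "poll_groups=$count admin_qpairs_total=$sum"
```

Before the transport is created the `transports` arrays are empty (hence the `jq '.poll_groups[0].transports[0]'` result of `null`); after `nvmf_create_transport -t tcp` each poll group reports a `"trtype": "TCP"` entry, as the second stats dump shows.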
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 Malloc1 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 [2024-11-20 08:14:59.344469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:54.775 [2024-11-20 08:14:59.381509] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:54.775 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:54.775 could not add new controller: failed to write to nvme-fabrics device 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:54.775 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.691 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.691 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:56.691 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.691 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:56.691 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.605 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.605 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.605 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- 
# local i=0 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:58.605 08:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:58.605 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.605 [2024-11-20 08:15:03.164550] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:58.605 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:58.606 could not add new controller: failed to write to nvme-fabrics device 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.606 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:00.519 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:00.519 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:00.519 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.519 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:00.519 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:02.433 08:15:06 
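The `waitforserial`/`waitforserial_disconnect` helpers that recur throughout the trace poll `lsblk` until a block device with the subsystem serial (`SPDKISFASTANDAWESOME`) appears or disappears. A rough sketch of that polling loop, with `lsblk` swapped for a hypothetical canned listing so the snippet runs without a live NVMe controller:

```shell
#!/usr/bin/env bash
# fake_lsblk stands in for `lsblk -l -o NAME,SERIAL` after `nvme connect`
# has attached the controller (device name here is illustrative).
fake_lsblk() {
  printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'
}

serial=SPDKISFASTANDAWESOME
nvme_device_counter=1
i=0
while (( i++ <= 15 )); do
  # count devices exposing the expected serial, as in the trace's grep -c
  nvme_devices=$(fake_lsblk | grep -c "$serial")
  (( nvme_devices == nvme_device_counter )) && break
  sleep 1
done

echo "devices=$nvme_devices"
```

The disconnect variant inverts the check: it loops while `lsblk ... | grep -q -w "$serial"` still matches, returning once the namespace is gone.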
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.433 08:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 [2024-11-20 08:15:06.936241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.433 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.820 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.820 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.820 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.820 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:03.820 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.366 [2024-11-20 08:15:10.705950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.366 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.367 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.367 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.367 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.367 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.752 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.752 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 
00:18:07.752 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.752 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:07.752 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.751 [2024-11-20 08:15:14.454686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.751 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.450 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.450 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.450 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.450 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:11.450 
08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:13.365 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 [2024-11-20 08:15:18.222680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.629 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:15.546 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:15.546 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:15.546 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.546 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:15.546 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 [2024-11-20 08:15:21.980818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.462 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.462 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.847 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.847 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:18.847 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.847 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:18.847 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.392 08:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 [2024-11-20 08:15:25.712657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.392 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 
08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 
08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 [2024-11-20 08:15:25.780814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 [2024-11-20 08:15:25.845012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 [2024-11-20 08:15:25.913233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.393 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 [2024-11-20 08:15:25.977445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.394 08:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:21.394 "tick_rate": 2400000000, 00:18:21.394 "poll_groups": [ 00:18:21.394 { 00:18:21.394 "name": "nvmf_tgt_poll_group_000", 00:18:21.394 "admin_qpairs": 0, 00:18:21.394 "io_qpairs": 224, 00:18:21.394 "current_admin_qpairs": 0, 00:18:21.394 "current_io_qpairs": 0, 00:18:21.394 "pending_bdev_io": 0, 00:18:21.394 "completed_nvme_io": 518, 00:18:21.394 "transports": [ 00:18:21.394 { 00:18:21.394 "trtype": "TCP" 00:18:21.394 } 00:18:21.394 ] 00:18:21.394 }, 00:18:21.394 { 00:18:21.394 "name": "nvmf_tgt_poll_group_001", 00:18:21.394 "admin_qpairs": 1, 00:18:21.394 "io_qpairs": 223, 00:18:21.394 "current_admin_qpairs": 0, 00:18:21.394 "current_io_qpairs": 0, 00:18:21.394 "pending_bdev_io": 0, 00:18:21.394 "completed_nvme_io": 225, 00:18:21.394 "transports": [ 00:18:21.394 { 00:18:21.394 "trtype": "TCP" 00:18:21.394 } 00:18:21.394 ] 00:18:21.394 }, 00:18:21.394 { 00:18:21.394 "name": "nvmf_tgt_poll_group_002", 00:18:21.394 "admin_qpairs": 6, 00:18:21.394 "io_qpairs": 218, 00:18:21.394 "current_admin_qpairs": 0, 00:18:21.394 "current_io_qpairs": 0, 00:18:21.394 "pending_bdev_io": 0, 00:18:21.394 "completed_nvme_io": 223, 00:18:21.394 "transports": [ 00:18:21.394 { 00:18:21.394 "trtype": "TCP" 00:18:21.394 } 00:18:21.394 ] 00:18:21.394 }, 00:18:21.394 { 00:18:21.394 "name": "nvmf_tgt_poll_group_003", 00:18:21.394 "admin_qpairs": 0, 00:18:21.394 "io_qpairs": 224, 00:18:21.394 "current_admin_qpairs": 0, 00:18:21.394 "current_io_qpairs": 0, 00:18:21.394 "pending_bdev_io": 0, 
00:18:21.394 "completed_nvme_io": 273, 00:18:21.394 "transports": [ 00:18:21.394 { 00:18:21.394 "trtype": "TCP" 00:18:21.394 } 00:18:21.394 ] 00:18:21.394 } 00:18:21.394 ] 00:18:21.394 }' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:21.394 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # 
set +e 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:21.655 rmmod nvme_tcp 00:18:21.655 rmmod nvme_fabrics 00:18:21.655 rmmod nvme_keyring 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 1914192 ']' 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 1914192 00:18:21.655 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1914192 ']' 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1914192 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914192 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914192' 00:18:21.656 killing process with pid 1914192 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1914192 00:18:21.656 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@978 -- # wait 1914192 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:21.917 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 
00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:18:23.832 00:18:23.832 real 0m39.339s 00:18:23.832 user 1m54.807s 00:18:23.832 sys 0m8.785s 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.832 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.832 ************************************ 00:18:23.832 END TEST nvmf_rpc 00:18:23.832 ************************************ 
00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.094 ************************************ 00:18:24.094 START TEST nvmf_invalid 00:18:24.094 ************************************ 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:24.094 * Looking for test storage... 00:18:24.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.094 
08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.094 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.094 --rc genhtml_branch_coverage=1 00:18:24.094 --rc genhtml_function_coverage=1 00:18:24.094 --rc genhtml_legend=1 00:18:24.094 --rc geninfo_all_blocks=1 00:18:24.094 --rc geninfo_unexecuted_blocks=1 00:18:24.094 00:18:24.094 ' 
00:18:24.095 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:24.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.095 --rc genhtml_branch_coverage=1 00:18:24.095 --rc genhtml_function_coverage=1 00:18:24.095 --rc genhtml_legend=1 00:18:24.095 --rc geninfo_all_blocks=1 00:18:24.095 --rc geninfo_unexecuted_blocks=1 00:18:24.095 00:18:24.095 ' 00:18:24.095 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:24.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.095 --rc genhtml_branch_coverage=1 00:18:24.095 --rc genhtml_function_coverage=1 00:18:24.095 --rc genhtml_legend=1 00:18:24.095 --rc geninfo_all_blocks=1 00:18:24.095 --rc geninfo_unexecuted_blocks=1 00:18:24.095 00:18:24.095 ' 00:18:24.095 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:24.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.095 --rc genhtml_branch_coverage=1 00:18:24.095 --rc genhtml_function_coverage=1 00:18:24.095 --rc genhtml_legend=1 00:18:24.095 --rc geninfo_all_blocks=1 00:18:24.095 --rc geninfo_unexecuted_blocks=1 00:18:24.095 00:18:24.095 ' 00:18:24.095 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.095 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.356 08:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- paths/export.sh@5 -- # export PATH 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 
00:18:24.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:24.356 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:18:24.357 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:18:32.502 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 
00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:32.502 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.503 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:32.503 
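The `pci_net_devs` steps traced above glob each device's sysfs `net/` directory for its bound net interfaces, then strip the path prefix with `${var##*/}`. A sketch of the same glob-then-basename idiom against a temporary directory standing in for sysfs:

```shell
#!/usr/bin/env bash
# Stand-in for /sys/bus/pci/devices/$pci/net/, which holds one entry
# per net device bound to that PCI function.
tmp=$(mktemp -d)
mkdir -p "$tmp/net"
touch "$tmp/net/cvl_0_0"

pci_net_devs=("$tmp/net/"*)              # glob: full paths
pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the basenames
echo "Found net devices: ${pci_net_devs[*]}"
rm -rf "$tmp"
```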
08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.503 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@247 -- # create_target_ns 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:32.503 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
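`NVMF_TARGET_NS_CMD` stores the `ip netns exec nvmf_ns_spdk` prefix as a bash array, and helpers like `set_up` receive the array *name* and resolve it with a nameref before expanding it in front of the real command. Since `ip netns` needs root, this sketch substitutes a harmless `env` prefix to show the same pattern:

```shell
#!/usr/bin/env bash
# Command-prefix-in-an-array pattern, as used for NVMF_TARGET_NS_CMD.
# 'env GREETING=hello' stands in for 'ip netns exec nvmf_ns_spdk'.
NS_CMD=(env GREETING=hello)

run_prefixed() {
  local -n prefix=$1; shift   # nameref: resolve the array by name
  "${prefix[@]}" "$@"         # expand the prefix, then the real command
}

run_prefixed NS_CMD sh -c 'echo "$GREETING world"'   # hello world
```

Passing the array by name (rather than its contents) is what lets one helper run either in the default namespace (empty prefix) or inside the target namespace.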
nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:32.503 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:32.766 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:32.766 10.0.0.1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:32.766 10.0.0.2 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:32.766 08:15:37 
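`set_ip` converts each pooled integer (167772161, i.e. 0x0a000001) into dotted-quad form via `val_to_ip` before calling `ip addr add`. A sketch of that conversion, reconstructed from the `printf '%u.%u.%u.%u\n'` call in the trace (the exact byte-extraction expressions in `nvmf/setup.sh` may differ):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad notation, as val_to_ip does
# when handing out addresses from the 0x0a000001 pool.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
    $((val >> 8 & 0xff))  $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Incrementing the pool by 2 per pair (as `(( _dev++, ip_pool += 2 ))` does above) yields consecutive initiator/target addresses in the same /24.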
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:32.766 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:32.766 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:18:32.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.607 ms 00:18:32.767 00:18:32.767 --- 10.0.0.1 ping statistics --- 00:18:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.767 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:32.767 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:33.030 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:33.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:18:33.030 00:18:33.030 --- 10.0.0.2 ping statistics --- 00:18:33.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.030 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:33.030 08:15:37 
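Throughout the trace, each interface's address is recorded in `/sys/class/net/<dev>/ifalias` with `tee` at setup time and read back with `cat` in `get_ip_address`. A sketch of that store-then-read round trip, using a temp file in place of the root-only sysfs node:

```shell
#!/usr/bin/env bash
# Stand-in for /sys/class/net/cvl_0_0/ifalias (writable only by root).
ifalias=$(mktemp)

echo 10.0.0.1 | tee "$ifalias" >/dev/null   # set_ip: record the address
ip=$(cat "$ifalias")                        # get_ip_address: read it back
[[ -n $ip ]] && echo "$ip"
rm -f "$ifalias"
```

Keeping the address in `ifalias` lets later helpers recover it from the device alone, without re-deriving it from the pool.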
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:33.030 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:18:33.030 
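The `initiator1`/`target1` probes above show `get_net_dev` falling through its `[[ -n ... ]]` checks and returning 1 when a slot was never added to `dev_map`, which is how `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` end up empty in this single-pair run. A sketch of that lookup-with-fallback shape (simplified; the real helper lives in `nvmf/setup.sh`):

```shell
#!/usr/bin/env bash
# get_net_dev-style lookup: map a logical name (initiator0/target0) to
# the real device, failing cleanly when the slot is unconfigured.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

get_net_dev() {
  local dev=$1
  [[ -n $dev && -n ${dev_map[$dev]} ]] || return 1
  echo "${dev_map[$dev]}"
}

get_net_dev initiator0                                    # cvl_0_0
get_net_dev initiator1 || echo "initiator1: not configured"
```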
08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:33.030 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 
00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@328 -- # nvmfpid=1925129 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 1925129 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1925129 ']' 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.031 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:33.031 [2024-11-20 08:15:37.691442] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:18:33.031 [2024-11-20 08:15:37.691514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.291 [2024-11-20 08:15:37.787944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.291 [2024-11-20 08:15:37.829851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:33.291 [2024-11-20 08:15:37.829904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.291 [2024-11-20 08:15:37.829913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.291 [2024-11-20 08:15:37.829923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.291 [2024-11-20 08:15:37.829929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.291 [2024-11-20 08:15:37.831564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.291 [2024-11-20 08:15:37.831683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.291 [2024-11-20 08:15:37.831841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.291 [2024-11-20 08:15:37.831842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:33.861 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27518 00:18:34.122 [2024-11-20 08:15:38.690943] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:34.122 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:34.122 { 00:18:34.122 "nqn": "nqn.2016-06.io.spdk:cnode27518", 00:18:34.122 "tgt_name": "foobar", 00:18:34.122 "method": "nvmf_create_subsystem", 00:18:34.122 "req_id": 1 00:18:34.122 } 00:18:34.122 Got JSON-RPC error response 00:18:34.122 response: 00:18:34.122 { 00:18:34.122 "code": -32603, 00:18:34.122 "message": "Unable to find target foobar" 00:18:34.122 }' 00:18:34.122 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:34.122 { 00:18:34.122 "nqn": "nqn.2016-06.io.spdk:cnode27518", 00:18:34.122 "tgt_name": "foobar", 00:18:34.122 "method": "nvmf_create_subsystem", 00:18:34.122 "req_id": 1 00:18:34.122 } 00:18:34.122 Got JSON-RPC error response 00:18:34.122 response: 00:18:34.122 { 00:18:34.122 "code": -32603, 00:18:34.122 "message": "Unable to find target foobar" 00:18:34.122 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:34.122 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:34.122 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17411 00:18:34.383 [2024-11-20 08:15:38.883622] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17411: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:34.383 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:34.383 { 00:18:34.383 "nqn": "nqn.2016-06.io.spdk:cnode17411", 00:18:34.383 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:18:34.383 "method": "nvmf_create_subsystem", 00:18:34.383 "req_id": 1 00:18:34.383 } 00:18:34.383 Got JSON-RPC error response 00:18:34.383 response: 00:18:34.383 { 00:18:34.383 "code": -32602, 00:18:34.383 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:34.383 }' 00:18:34.383 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:34.383 { 00:18:34.383 "nqn": "nqn.2016-06.io.spdk:cnode17411", 00:18:34.383 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:34.383 "method": "nvmf_create_subsystem", 00:18:34.383 "req_id": 1 00:18:34.383 } 00:18:34.383 Got JSON-RPC error response 00:18:34.383 response: 00:18:34.383 { 00:18:34.383 "code": -32602, 00:18:34.383 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:34.383 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:34.383 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:34.383 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29913 00:18:34.383 [2024-11-20 08:15:39.068156] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29913: invalid model number 'SPDK_Controller' 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:34.383 { 00:18:34.383 "nqn": "nqn.2016-06.io.spdk:cnode29913", 00:18:34.383 "model_number": "SPDK_Controller\u001f", 00:18:34.383 "method": "nvmf_create_subsystem", 00:18:34.383 "req_id": 1 00:18:34.383 } 00:18:34.383 Got JSON-RPC error response 00:18:34.383 response: 00:18:34.383 { 00:18:34.383 "code": -32602, 00:18:34.383 "message": "Invalid MN SPDK_Controller\u001f" 00:18:34.383 }' 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:34.383 { 00:18:34.383 "nqn": 
"nqn.2016-06.io.spdk:cnode29913", 00:18:34.383 "model_number": "SPDK_Controller\u001f", 00:18:34.383 "method": "nvmf_create_subsystem", 00:18:34.383 "req_id": 1 00:18:34.383 } 00:18:34.383 Got JSON-RPC error response 00:18:34.383 response: 00:18:34.383 { 00:18:34.383 "code": -32602, 00:18:34.383 "message": "Invalid MN SPDK_Controller\u001f" 00:18:34.383 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:34.383 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:34.643 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:34.643 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.643 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:34.644 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
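The xtrace above shows `gen_random_s` building a random string one character at a time: pick a decimal code from the printable-ASCII table, convert it with `printf %x`, decode it with `echo -e '\xNN'`, and append it with `string+=`. A minimal standalone sketch of that technique follows — the function name, length parameter, and the `printf`/`echo -e` conversion are taken from the trace; the `RANDOM`-based selection and the exact 32–126 range are assumptions, since the trace only shows the codes that happened to be chosen.

```shell
# Sketch of the gen_random_s pattern visible in the trace (assumption:
# characters are drawn uniformly from printable ASCII via $RANDOM).
gen_random_s() {
    local length=$1 ll string="" code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 95 ))    # printable ASCII 32..126
        hex=$(printf '%x' "$code")      # decimal -> hex, as 'printf %x' above
        string+=$(echo -e "\x$hex")     # decode '\xNN', as 'echo -e' above
    done
    echo "$string"
}

gen_random_s 21   # e.g. a 21-character serial like 'J@f/-d%aDuq!Oum{6o4FG'
```

One character per loop iteration is why the trace repeats the same four xtrace lines (`ll++`, `ll < length`, `printf %x`, `echo -e`) dozens of times for a 21- or 41-character string.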
00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 
00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:34.644 
08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J@f/-d%aDuq!Oum{6o4FG' 00:18:34.644 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'J@f/-d%aDuq!Oum{6o4FG' nqn.2016-06.io.spdk:cnode5977 00:18:34.905 [2024-11-20 08:15:39.425329] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5977: invalid serial number 'J@f/-d%aDuq!Oum{6o4FG' 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:34.905 { 00:18:34.905 "nqn": "nqn.2016-06.io.spdk:cnode5977", 00:18:34.905 "serial_number": "J@f/-d%aDuq!Oum{6o4FG", 00:18:34.905 "method": "nvmf_create_subsystem", 00:18:34.905 "req_id": 1 00:18:34.905 } 00:18:34.905 Got JSON-RPC error response 00:18:34.905 response: 00:18:34.905 { 00:18:34.905 "code": -32602, 
00:18:34.905 "message": "Invalid SN J@f/-d%aDuq!Oum{6o4FG" 00:18:34.905 }' 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:34.905 { 00:18:34.905 "nqn": "nqn.2016-06.io.spdk:cnode5977", 00:18:34.905 "serial_number": "J@f/-d%aDuq!Oum{6o4FG", 00:18:34.905 "method": "nvmf_create_subsystem", 00:18:34.905 "req_id": 1 00:18:34.905 } 00:18:34.905 Got JSON-RPC error response 00:18:34.905 response: 00:18:34.905 { 00:18:34.905 "code": -32602, 00:18:34.905 "message": "Invalid SN J@f/-d%aDuq!Oum{6o4FG" 00:18:34.905 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:34.905 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:34.905 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:34.906 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:34.906 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:34.906 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:34.906 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.167 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:35.168 
08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Qm]9e:Mbiuk~)i8GVUP\TS#hu32kC.;.^4H`&V'\''0Z' 00:18:35.168 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Qm]9e:Mbiuk~)i8GVUP\TS#hu32kC.;.^4H`&V'\''0Z' nqn.2016-06.io.spdk:cnode11882 00:18:35.428 [2024-11-20 08:15:39.938970] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11882: invalid model number 'Qm]9e:Mbiuk~)i8GVUP\TS#hu32kC.;.^4H`&V'0Z' 00:18:35.428 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:35.428 { 00:18:35.428 "nqn": "nqn.2016-06.io.spdk:cnode11882", 00:18:35.428 "model_number": "Qm]9e:Mbiuk~)i8GVUP\\TS#hu32kC.;.^4H`&V'\''0Z", 00:18:35.428 "method": "nvmf_create_subsystem", 00:18:35.428 "req_id": 1 00:18:35.428 } 00:18:35.428 Got 
JSON-RPC error response 00:18:35.428 response: 00:18:35.428 { 00:18:35.428 "code": -32602, 00:18:35.428 "message": "Invalid MN Qm]9e:Mbiuk~)i8GVUP\\TS#hu32kC.;.^4H`&V'\''0Z" 00:18:35.428 }' 00:18:35.428 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:35.428 { 00:18:35.428 "nqn": "nqn.2016-06.io.spdk:cnode11882", 00:18:35.428 "model_number": "Qm]9e:Mbiuk~)i8GVUP\\TS#hu32kC.;.^4H`&V'0Z", 00:18:35.428 "method": "nvmf_create_subsystem", 00:18:35.428 "req_id": 1 00:18:35.428 } 00:18:35.428 Got JSON-RPC error response 00:18:35.428 response: 00:18:35.428 { 00:18:35.428 "code": -32602, 00:18:35.428 "message": "Invalid MN Qm]9e:Mbiuk~)i8GVUP\\TS#hu32kC.;.^4H`&V'0Z" 00:18:35.428 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:35.428 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:35.428 [2024-11-20 08:15:40.123647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.689 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:35.689 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.1 -s 4421 00:18:35.950 [2024-11-20 08:15:40.496772] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:35.950 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='request: 00:18:35.950 { 00:18:35.950 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:35.950 "listen_address": { 00:18:35.950 "trtype": "tcp", 00:18:35.950 "traddr": "10.0.0.1", 00:18:35.950 "trsvcid": "4421" 00:18:35.950 }, 00:18:35.950 "method": 
"nvmf_subsystem_remove_listener", 00:18:35.950 "req_id": 1 00:18:35.950 } 00:18:35.950 Got JSON-RPC error response 00:18:35.950 response: 00:18:35.950 { 00:18:35.950 "code": -32602, 00:18:35.950 "message": "Invalid parameters" 00:18:35.950 }' 00:18:35.950 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ request: 00:18:35.950 { 00:18:35.950 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:35.950 "listen_address": { 00:18:35.950 "trtype": "tcp", 00:18:35.950 "traddr": "10.0.0.1", 00:18:35.950 "trsvcid": "4421" 00:18:35.950 }, 00:18:35.950 "method": "nvmf_subsystem_remove_listener", 00:18:35.950 "req_id": 1 00:18:35.950 } 00:18:35.950 Got JSON-RPC error response 00:18:35.950 response: 00:18:35.950 { 00:18:35.950 "code": -32602, 00:18:35.950 "message": "Invalid parameters" 00:18:35.950 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:35.950 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13249 -i 0 00:18:36.210 [2024-11-20 08:15:40.685352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13249: invalid cntlid range [0-65519] 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='request: 00:18:36.210 { 00:18:36.210 "nqn": "nqn.2016-06.io.spdk:cnode13249", 00:18:36.210 "min_cntlid": 0, 00:18:36.210 "method": "nvmf_create_subsystem", 00:18:36.210 "req_id": 1 00:18:36.210 } 00:18:36.210 Got JSON-RPC error response 00:18:36.210 response: 00:18:36.210 { 00:18:36.210 "code": -32602, 00:18:36.210 "message": "Invalid cntlid range [0-65519]" 00:18:36.210 }' 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # [[ request: 00:18:36.210 { 00:18:36.210 "nqn": "nqn.2016-06.io.spdk:cnode13249", 00:18:36.210 "min_cntlid": 0, 00:18:36.210 "method": "nvmf_create_subsystem", 00:18:36.210 
"req_id": 1 00:18:36.210 } 00:18:36.210 Got JSON-RPC error response 00:18:36.210 response: 00:18:36.210 { 00:18:36.210 "code": -32602, 00:18:36.210 "message": "Invalid cntlid range [0-65519]" 00:18:36.210 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8583 -i 65520 00:18:36.210 [2024-11-20 08:15:40.869949] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8583: invalid cntlid range [65520-65519] 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='request: 00:18:36.210 { 00:18:36.210 "nqn": "nqn.2016-06.io.spdk:cnode8583", 00:18:36.210 "min_cntlid": 65520, 00:18:36.210 "method": "nvmf_create_subsystem", 00:18:36.210 "req_id": 1 00:18:36.210 } 00:18:36.210 Got JSON-RPC error response 00:18:36.210 response: 00:18:36.210 { 00:18:36.210 "code": -32602, 00:18:36.210 "message": "Invalid cntlid range [65520-65519]" 00:18:36.210 }' 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ request: 00:18:36.210 { 00:18:36.210 "nqn": "nqn.2016-06.io.spdk:cnode8583", 00:18:36.210 "min_cntlid": 65520, 00:18:36.210 "method": "nvmf_create_subsystem", 00:18:36.210 "req_id": 1 00:18:36.210 } 00:18:36.210 Got JSON-RPC error response 00:18:36.210 response: 00:18:36.210 { 00:18:36.210 "code": -32602, 00:18:36.210 "message": "Invalid cntlid range [65520-65519]" 00:18:36.210 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:36.210 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8765 -I 0 00:18:36.471 [2024-11-20 08:15:41.062592] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8765: 
invalid cntlid range [1-0] 00:18:36.471 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='request: 00:18:36.471 { 00:18:36.471 "nqn": "nqn.2016-06.io.spdk:cnode8765", 00:18:36.471 "max_cntlid": 0, 00:18:36.471 "method": "nvmf_create_subsystem", 00:18:36.471 "req_id": 1 00:18:36.471 } 00:18:36.471 Got JSON-RPC error response 00:18:36.471 response: 00:18:36.471 { 00:18:36.471 "code": -32602, 00:18:36.471 "message": "Invalid cntlid range [1-0]" 00:18:36.471 }' 00:18:36.471 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ request: 00:18:36.471 { 00:18:36.471 "nqn": "nqn.2016-06.io.spdk:cnode8765", 00:18:36.471 "max_cntlid": 0, 00:18:36.471 "method": "nvmf_create_subsystem", 00:18:36.471 "req_id": 1 00:18:36.471 } 00:18:36.471 Got JSON-RPC error response 00:18:36.471 response: 00:18:36.471 { 00:18:36.471 "code": -32602, 00:18:36.471 "message": "Invalid cntlid range [1-0]" 00:18:36.471 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:36.471 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14837 -I 65520 00:18:36.732 [2024-11-20 08:15:41.251193] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14837: invalid cntlid range [1-65520] 00:18:36.732 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='request: 00:18:36.732 { 00:18:36.732 "nqn": "nqn.2016-06.io.spdk:cnode14837", 00:18:36.732 "max_cntlid": 65520, 00:18:36.732 "method": "nvmf_create_subsystem", 00:18:36.732 "req_id": 1 00:18:36.732 } 00:18:36.732 Got JSON-RPC error response 00:18:36.732 response: 00:18:36.732 { 00:18:36.732 "code": -32602, 00:18:36.732 "message": "Invalid cntlid range [1-65520]" 00:18:36.732 }' 00:18:36.732 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ request: 00:18:36.732 { 
00:18:36.732 "nqn": "nqn.2016-06.io.spdk:cnode14837", 00:18:36.732 "max_cntlid": 65520, 00:18:36.732 "method": "nvmf_create_subsystem", 00:18:36.732 "req_id": 1 00:18:36.732 } 00:18:36.732 Got JSON-RPC error response 00:18:36.732 response: 00:18:36.732 { 00:18:36.732 "code": -32602, 00:18:36.732 "message": "Invalid cntlid range [1-65520]" 00:18:36.732 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:36.732 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31929 -i 6 -I 5 00:18:36.732 [2024-11-20 08:15:41.439828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31929: invalid cntlid range [6-5] 00:18:36.992 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # out='request: 00:18:36.992 { 00:18:36.992 "nqn": "nqn.2016-06.io.spdk:cnode31929", 00:18:36.992 "min_cntlid": 6, 00:18:36.992 "max_cntlid": 5, 00:18:36.992 "method": "nvmf_create_subsystem", 00:18:36.992 "req_id": 1 00:18:36.992 } 00:18:36.992 Got JSON-RPC error response 00:18:36.992 response: 00:18:36.992 { 00:18:36.992 "code": -32602, 00:18:36.992 "message": "Invalid cntlid range [6-5]" 00:18:36.992 }' 00:18:36.992 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ request: 00:18:36.992 { 00:18:36.992 "nqn": "nqn.2016-06.io.spdk:cnode31929", 00:18:36.992 "min_cntlid": 6, 00:18:36.992 "max_cntlid": 5, 00:18:36.992 "method": "nvmf_create_subsystem", 00:18:36.992 "req_id": 1 00:18:36.992 } 00:18:36.993 Got JSON-RPC error response 00:18:36.993 response: 00:18:36.993 { 00:18:36.993 "code": -32602, 00:18:36.993 "message": "Invalid cntlid range [6-5]" 00:18:36.993 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:18:36.993 { 00:18:36.993 "name": "foobar", 00:18:36.993 "method": "nvmf_delete_target", 00:18:36.993 "req_id": 1 00:18:36.993 } 00:18:36.993 Got JSON-RPC error response 00:18:36.993 response: 00:18:36.993 { 00:18:36.993 "code": -32602, 00:18:36.993 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:36.993 }' 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:18:36.993 { 00:18:36.993 "name": "foobar", 00:18:36.993 "method": "nvmf_delete_target", 00:18:36.993 "req_id": 1 00:18:36.993 } 00:18:36.993 Got JSON-RPC error response 00:18:36.993 response: 00:18:36.993 { 00:18:36.993 "code": -32602, 00:18:36.993 "message": "The specified target doesn't exist, cannot delete it." 00:18:36.993 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:36.993 rmmod nvme_tcp 00:18:36.993 rmmod 
nvme_fabrics 00:18:36.993 rmmod nvme_keyring 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 1925129 ']' 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 1925129 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1925129 ']' 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1925129 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.993 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925129 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925129' 00:18:37.253 killing process with pid 1925129 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1925129 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1925129 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@342 -- # nvmf_fini 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@254 -- # local dev 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:37.253 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:39.795 08:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-restore 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-save 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:39.795 00:18:39.795 real 0m15.340s 00:18:39.795 user 0m21.021s 00:18:39.795 sys 0m7.603s 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:39.795 ************************************ 00:18:39.795 END TEST nvmf_invalid 00:18:39.795 ************************************ 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.795 08:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.795 ************************************ 00:18:39.795 START TEST nvmf_connect_stress 00:18:39.795 ************************************ 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:39.795 * Looking for test storage... 00:18:39.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@336 -- # read -ra ver1 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.795 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.796 --rc genhtml_branch_coverage=1 00:18:39.796 --rc genhtml_function_coverage=1 00:18:39.796 --rc genhtml_legend=1 00:18:39.796 --rc 
geninfo_all_blocks=1 00:18:39.796 --rc geninfo_unexecuted_blocks=1 00:18:39.796 00:18:39.796 ' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.796 --rc genhtml_branch_coverage=1 00:18:39.796 --rc genhtml_function_coverage=1 00:18:39.796 --rc genhtml_legend=1 00:18:39.796 --rc geninfo_all_blocks=1 00:18:39.796 --rc geninfo_unexecuted_blocks=1 00:18:39.796 00:18:39.796 ' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.796 --rc genhtml_branch_coverage=1 00:18:39.796 --rc genhtml_function_coverage=1 00:18:39.796 --rc genhtml_legend=1 00:18:39.796 --rc geninfo_all_blocks=1 00:18:39.796 --rc geninfo_unexecuted_blocks=1 00:18:39.796 00:18:39.796 ' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.796 --rc genhtml_branch_coverage=1 00:18:39.796 --rc genhtml_function_coverage=1 00:18:39.796 --rc genhtml_legend=1 00:18:39.796 --rc geninfo_all_blocks=1 00:18:39.796 --rc geninfo_unexecuted_blocks=1 00:18:39.796 00:18:39.796 ' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:39.796 08:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:39.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:18:39.796 08:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:18:39.796 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 
00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.938 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.939 08:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:47.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:47.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:47.939 Found net devices under 0000:31:00.0: cvl_0_0 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:47.939 Found net devices under 0000:31:00.1: cvl_0_1 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:47.939 
08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:47.939 08:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:47.939 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:47.940 10.0.0.1 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:47.940 10.0.0.2 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:47.940 
08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:47.940 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:48.202 08:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:48.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.742 ms 00:18:48.202 00:18:48.202 --- 10.0.0.1 ping statistics --- 00:18:48.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.202 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:48.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:18:48.202 00:18:48.202 --- 10.0.0.2 ping statistics --- 00:18:48.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.202 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:48.202 
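The xtrace above shows setup.sh resolving each test interface's address by reading `/sys/class/net/<dev>/ifalias` and then ping-verifying it. A minimal sketch of that helper pair follows; the `SYSFS_NET` override is an assumption added here for testability (setup.sh itself uses the fixed sysfs path), and the function names are ours, not the script's.

```shell
# Sketch of the get_ip_address / ping_ip pattern traced above.
# Assumption: each test interface stores its IP in its ifalias file,
# as the harness does when it sets the links up.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

get_ip_from_ifalias() {
    local dev=$1
    local alias_file="$SYSFS_NET/$dev/ifalias"
    # prints the stored address, or nothing when no alias is readable
    [ -r "$alias_file" ] && cat "$alias_file"
}

ping_ip() {
    # send a single probe, as the `ping -c 1` calls in the log do
    local ip=$1 count=${2:-1}
    ping -c "$count" "$ip"
}
```

In the log the same lookup runs twice per pair: once on the host side and once under `ip netns exec nvmf_ns_spdk` for the target-side device.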
08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:48.202 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.203 
08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:48.203 08:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=1930879 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 1930879 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1930879 ']' 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
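Here nvmfappstart launches `nvmf_tgt` inside the `nvmf_ns_spdk` namespace and then blocks in `waitforlisten` until the RPC socket at `/var/tmp/spdk.sock` comes up. A hedged sketch of that wait loop; the retry budget and poll interval are illustrative, and the real helper in autotest_common.sh is more elaborate than this.

```shell
# Poll until a freshly started target exposes its RPC socket, giving up
# if the process dies first or the retry budget runs out. The default
# socket path matches the log; the numbers are illustrative only.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        # kill -0 delivers no signal; it only checks the PID still exists
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$sock" ] && return 0
        sleep 0.1
        retries=$(( retries - 1 ))
    done
    return 1
}
```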
00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.203 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.464 [2024-11-20 08:15:52.942530] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:18:48.464 [2024-11-20 08:15:52.942599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.464 [2024-11-20 08:15:53.056730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.464 [2024-11-20 08:15:53.108074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.464 [2024-11-20 08:15:53.108125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.464 [2024-11-20 08:15:53.108134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.464 [2024-11-20 08:15:53.108141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.464 [2024-11-20 08:15:53.108148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
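The target was started with `-m 0xE`, and the three "Reactor started on core N" notices that follow are consistent with that mask: 0xE is binary 1110, selecting cores 1, 2 and 3. A small sketch decoding such a mask (the function name is ours, not an SPDK helper):

```shell
# Decode an SPDK-style hex core mask into the core indices it selects.
# 0xE = 0b1110 -> cores 1 2 3, matching the reactor notices in the log.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            cores="$cores $core"   # this bit is set: core is selected
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo $cores
}
```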
00:18:48.464 [2024-11-20 08:15:53.109998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.464 [2024-11-20 08:15:53.110265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.464 [2024-11-20 08:15:53.110268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.034 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.034 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:49.034 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:49.034 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.034 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 [2024-11-20 08:15:53.784432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 [2024-11-20 08:15:53.808810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 NULL1 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1931172 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.557 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.557 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:49.557 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.557 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.557 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.129 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.129 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:50.129 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.129 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.129 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.390 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.390 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:50.390 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.390 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.390 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.650 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.650 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:50.650 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.651 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.651 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.911 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:50.911 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.911 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.911 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.172 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.172 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:51.172 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.172 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.172 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.743 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.743 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:51.743 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.743 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.743 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.004 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.004 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:52.004 08:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.004 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.004 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.265 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.265 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:52.265 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.265 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.265 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.525 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.525 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:52.525 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.525 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.525 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.785 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.785 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:52.785 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.785 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.785 
08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.355 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.355 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:53.355 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.355 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.355 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.616 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.616 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:53.616 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.616 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.616 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.878 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.878 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:53.878 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.878 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.878 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.140 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.140 
08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:54.140 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.140 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.140 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.710 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:54.710 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.710 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.710 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.970 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.970 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:54.970 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.970 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.970 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.230 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.230 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:55.230 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
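The repeated `kill -0 1931172` checks are the stress loop's liveness probe for the connect_stress process: signal 0 delivers nothing and only reports whether the PID still exists. The pattern reduces to a small watchdog loop (the poll interval is illustrative; the test also issues RPCs between probes, which this sketch omits):

```shell
# Block until the given PID exits, probing with the same signal-0 trick
# the log uses. No signal is ever delivered to the process.
watch_pid() {
    local pid=$1 interval=${2:-1}
    while kill -0 "$pid" 2>/dev/null; do
        sleep "$interval"
    done
}
```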
00:18:55.230 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.230 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.491 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.491 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:55.491 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:55.491 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.491 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.752 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.752 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:55.752 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:55.752 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.753 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.325 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.325 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:56.325 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.325 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.325 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:56.586 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.586 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:56.586 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.586 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.586 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.847 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.847 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:56.847 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.847 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.847 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.107 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.107 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:57.107 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.107 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.107 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.368 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.368 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1931172 00:18:57.368 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.368 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.368 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:57.938 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.938 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.199 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.199 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:58.199 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.199 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.199 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.459 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.459 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:58.459 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.459 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:58.459 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.719 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.719 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:58.719 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.719 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.719 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.979 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.979 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:58.980 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.980 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.980 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.240 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1931172 00:18:59.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1931172) - No such process 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1931172 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:59.502 rmmod nvme_tcp 00:18:59.502 rmmod nvme_fabrics 00:18:59.502 rmmod nvme_keyring 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 1930879 ']' 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 1930879 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1930879 ']' 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1930879 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930879 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930879' 00:18:59.502 killing process with pid 1930879 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1930879 00:18:59.502 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1930879 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:59.763 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:01.675 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev 
cvl_0_1' 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore 00:19:01.675 00:19:01.675 real 0m22.328s 00:19:01.675 user 0m42.580s 00:19:01.675 sys 0m9.853s 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.675 ************************************ 00:19:01.675 END TEST nvmf_connect_stress 00:19:01.675 ************************************ 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.675 08:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.936 ************************************ 00:19:01.936 START TEST nvmf_fused_ordering 00:19:01.936 
************************************ 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:01.936 * Looking for test storage... 00:19:01.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.936 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # 
ver2_l=1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.937 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:01.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.937 --rc genhtml_branch_coverage=1 00:19:01.937 --rc genhtml_function_coverage=1 00:19:01.937 --rc genhtml_legend=1 00:19:01.937 --rc geninfo_all_blocks=1 00:19:01.937 --rc geninfo_unexecuted_blocks=1 00:19:01.937 00:19:01.937 ' 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:01.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.937 --rc genhtml_branch_coverage=1 00:19:01.937 --rc genhtml_function_coverage=1 00:19:01.937 --rc genhtml_legend=1 00:19:01.937 --rc geninfo_all_blocks=1 00:19:01.937 --rc geninfo_unexecuted_blocks=1 00:19:01.937 00:19:01.937 ' 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:01.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.937 --rc genhtml_branch_coverage=1 00:19:01.937 --rc genhtml_function_coverage=1 00:19:01.937 --rc genhtml_legend=1 00:19:01.937 --rc geninfo_all_blocks=1 00:19:01.937 --rc geninfo_unexecuted_blocks=1 00:19:01.937 00:19:01.937 ' 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:01.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.937 --rc genhtml_branch_coverage=1 00:19:01.937 --rc genhtml_function_coverage=1 00:19:01.937 --rc genhtml_legend=1 00:19:01.937 --rc geninfo_all_blocks=1 00:19:01.937 --rc geninfo_unexecuted_blocks=1 00:19:01.937 00:19:01.937 ' 
00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:01.937 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:01.937 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:02.198 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:19:02.198 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:10.504 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:10.504 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:10.504 08:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:10.504 Found net devices under 0000:31:00.0: cvl_0_0 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:10.504 Found net devices under 0000:31:00.1: cvl_0_1 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@247 -- # create_target_ns 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:19:10.504 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:10.505 08:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:10.505 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:10.505 10.0.0.1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@11 -- # local val=167772162 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:10.505 10.0.0.2 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # 
(( pair = 0 )) 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:10.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.616 ms 00:19:10.505 00:19:10.505 --- 10.0.0.1 ping statistics --- 00:19:10.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.505 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:10.505 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:10.506 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:10.506 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:10.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:10.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:19:10.768 00:19:10.768 --- 10.0.0.2 ping statistics --- 00:19:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.768 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:10.768 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:10.769 08:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=1937975 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 1937975 00:19:10.769 08:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1937975 ']' 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.769 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:10.769 [2024-11-20 08:16:15.415095] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:19:10.769 [2024-11-20 08:16:15.415172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.030 [2024-11-20 08:16:15.523797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.030 [2024-11-20 08:16:15.573213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.030 [2024-11-20 08:16:15.573264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:11.030 [2024-11-20 08:16:15.573272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.030 [2024-11-20 08:16:15.573280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.030 [2024-11-20 08:16:15.573286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.030 [2024-11-20 08:16:15.574090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 [2024-11-20 08:16:16.282630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 [2024-11-20 08:16:16.298951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 NULL1 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.603 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:11.864 [2024-11-20 08:16:16.355822] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
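The target-side setup traced above (fused_ordering.sh steps @15 through @20) boils down to six JSON-RPC calls against the running nvmf_tgt. A minimal sketch of that sequence, assuming an already-started target and with the `RPC` path being illustrative rather than taken from this log:

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from the trace above.
# Assumes nvmf_tgt is already running; RPC path is illustrative.
RPC="${RPC:-scripts/rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"

# Each entry mirrors one rpc_cmd invocation in fused_ordering.sh.
setup_cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "bdev_null_create NULL1 1000 512"
  "bdev_wait_for_examine"
  "nvmf_subsystem_add_ns $NQN NULL1"
)

# Only issue the RPCs when rpc.py is actually reachable.
if command -v "$RPC" >/dev/null 2>&1; then
  for cmd in "${setup_cmds[@]}"; do
    # Intentional word splitting: each string is one rpc.py argv tail.
    $RPC $cmd
  done
fi

echo "${#setup_cmds[@]} RPC steps"
```

After this sequence the fused_ordering binary connects with the same trid string seen in the trace (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`) and drives the numbered fused-command iterations that follow.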
00:19:11.864 [2024-11-20 08:16:16.355881] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938100 ] 00:19:12.125 Attached to nqn.2016-06.io.spdk:cnode1 00:19:12.125 Namespace ID: 1 size: 1GB 00:19:12.125 fused_ordering(0) 00:19:12.125 fused_ordering(1) 00:19:12.125 fused_ordering(2) 00:19:12.125 fused_ordering(3) 00:19:12.125 fused_ordering(4) 00:19:12.125 fused_ordering(5) 00:19:12.125 fused_ordering(6) 00:19:12.125 fused_ordering(7) 00:19:12.125 fused_ordering(8) 00:19:12.125 fused_ordering(9) 00:19:12.125 fused_ordering(10) 00:19:12.125 fused_ordering(11) 00:19:12.125 fused_ordering(12) 00:19:12.125 fused_ordering(13) 00:19:12.125 fused_ordering(14) 00:19:12.125 fused_ordering(15) 00:19:12.125 fused_ordering(16) 00:19:12.125 fused_ordering(17) 00:19:12.125 fused_ordering(18) 00:19:12.125 fused_ordering(19) 00:19:12.125 fused_ordering(20) 00:19:12.125 fused_ordering(21) 00:19:12.125 fused_ordering(22) 00:19:12.125 fused_ordering(23) 00:19:12.125 fused_ordering(24) 00:19:12.125 fused_ordering(25) 00:19:12.125 fused_ordering(26) 00:19:12.125 fused_ordering(27) 00:19:12.125 fused_ordering(28) 00:19:12.125 fused_ordering(29) 00:19:12.125 fused_ordering(30) 00:19:12.125 fused_ordering(31) 00:19:12.125 fused_ordering(32) 00:19:12.125 fused_ordering(33) 00:19:12.125 fused_ordering(34) 00:19:12.125 fused_ordering(35) 00:19:12.125 fused_ordering(36) 00:19:12.125 fused_ordering(37) 00:19:12.125 fused_ordering(38) 00:19:12.125 fused_ordering(39) 00:19:12.125 fused_ordering(40) 00:19:12.125 fused_ordering(41) 00:19:12.125 fused_ordering(42) 00:19:12.125 fused_ordering(43) 00:19:12.125 fused_ordering(44) 00:19:12.125 fused_ordering(45) 00:19:12.125 fused_ordering(46) 00:19:12.125 fused_ordering(47) 00:19:12.125 fused_ordering(48) 00:19:12.125 fused_ordering(49) 00:19:12.125 
fused_ordering(50) 00:19:12.125 fused_ordering(51) 00:19:12.125 fused_ordering(52) 00:19:12.125 fused_ordering(53) 00:19:12.125 fused_ordering(54) 00:19:12.125 fused_ordering(55) 00:19:12.125 fused_ordering(56) 00:19:12.125 fused_ordering(57) 00:19:12.125 fused_ordering(58) 00:19:12.125 fused_ordering(59) 00:19:12.125 fused_ordering(60) 00:19:12.125 fused_ordering(61) 00:19:12.125 fused_ordering(62) 00:19:12.125 fused_ordering(63) 00:19:12.125 fused_ordering(64) 00:19:12.125 fused_ordering(65) 00:19:12.125 fused_ordering(66) 00:19:12.125 fused_ordering(67) 00:19:12.125 fused_ordering(68) 00:19:12.125 fused_ordering(69) 00:19:12.125 fused_ordering(70) 00:19:12.125 fused_ordering(71) 00:19:12.125 fused_ordering(72) 00:19:12.125 fused_ordering(73) 00:19:12.125 fused_ordering(74) 00:19:12.125 fused_ordering(75) 00:19:12.125 fused_ordering(76) 00:19:12.125 fused_ordering(77) 00:19:12.125 fused_ordering(78) 00:19:12.125 fused_ordering(79) 00:19:12.125 fused_ordering(80) 00:19:12.125 fused_ordering(81) 00:19:12.125 fused_ordering(82) 00:19:12.125 fused_ordering(83) 00:19:12.125 fused_ordering(84) 00:19:12.125 fused_ordering(85) 00:19:12.125 fused_ordering(86) 00:19:12.125 fused_ordering(87) 00:19:12.125 fused_ordering(88) 00:19:12.125 fused_ordering(89) 00:19:12.125 fused_ordering(90) 00:19:12.125 fused_ordering(91) 00:19:12.125 fused_ordering(92) 00:19:12.125 fused_ordering(93) 00:19:12.125 fused_ordering(94) 00:19:12.125 fused_ordering(95) 00:19:12.125 fused_ordering(96) 00:19:12.125 fused_ordering(97) 00:19:12.125 fused_ordering(98) 00:19:12.125 fused_ordering(99) 00:19:12.125 fused_ordering(100) 00:19:12.125 fused_ordering(101) 00:19:12.125 fused_ordering(102) 00:19:12.125 fused_ordering(103) 00:19:12.125 fused_ordering(104) 00:19:12.125 fused_ordering(105) 00:19:12.125 fused_ordering(106) 00:19:12.125 fused_ordering(107) 00:19:12.125 fused_ordering(108) 00:19:12.125 fused_ordering(109) 00:19:12.125 fused_ordering(110) 00:19:12.125 fused_ordering(111) 00:19:12.125 
fused_ordering(112) 00:19:12.125 fused_ordering(113) 00:19:12.125 fused_ordering(114) 00:19:12.125 fused_ordering(115) 00:19:12.125 fused_ordering(116) 00:19:12.125 fused_ordering(117) 00:19:12.125 fused_ordering(118) 00:19:12.125 fused_ordering(119) 00:19:12.125 fused_ordering(120) 00:19:12.125 fused_ordering(121) 00:19:12.125 fused_ordering(122) 00:19:12.125 fused_ordering(123) 00:19:12.125 fused_ordering(124) 00:19:12.125 fused_ordering(125) 00:19:12.125 fused_ordering(126) 00:19:12.125 fused_ordering(127) 00:19:12.125 fused_ordering(128) 00:19:12.125 fused_ordering(129) 00:19:12.125 fused_ordering(130) 00:19:12.125 fused_ordering(131) 00:19:12.125 fused_ordering(132) 00:19:12.125 fused_ordering(133) 00:19:12.125 fused_ordering(134) 00:19:12.125 fused_ordering(135) 00:19:12.125 fused_ordering(136) 00:19:12.125 fused_ordering(137) 00:19:12.125 fused_ordering(138) 00:19:12.125 fused_ordering(139) 00:19:12.125 fused_ordering(140) 00:19:12.125 fused_ordering(141) 00:19:12.125 fused_ordering(142) 00:19:12.125 fused_ordering(143) 00:19:12.125 fused_ordering(144) 00:19:12.125 fused_ordering(145) 00:19:12.125 fused_ordering(146) 00:19:12.125 fused_ordering(147) 00:19:12.125 fused_ordering(148) 00:19:12.125 fused_ordering(149) 00:19:12.125 fused_ordering(150) 00:19:12.125 fused_ordering(151) 00:19:12.125 fused_ordering(152) 00:19:12.125 fused_ordering(153) 00:19:12.125 fused_ordering(154) 00:19:12.126 fused_ordering(155) 00:19:12.126 fused_ordering(156) 00:19:12.126 fused_ordering(157) 00:19:12.126 fused_ordering(158) 00:19:12.126 fused_ordering(159) 00:19:12.126 fused_ordering(160) 00:19:12.126 fused_ordering(161) 00:19:12.126 fused_ordering(162) 00:19:12.126 fused_ordering(163) 00:19:12.126 fused_ordering(164) 00:19:12.126 fused_ordering(165) 00:19:12.126 fused_ordering(166) 00:19:12.126 fused_ordering(167) 00:19:12.126 fused_ordering(168) 00:19:12.126 fused_ordering(169) 00:19:12.126 fused_ordering(170) 00:19:12.126 fused_ordering(171) 00:19:12.126 fused_ordering(172) 
00:19:12.126 fused_ordering(173) 00:19:12.126 fused_ordering(174) 00:19:12.126 fused_ordering(175) 00:19:12.126 fused_ordering(176) 00:19:12.126 fused_ordering(177) 00:19:12.126 fused_ordering(178) 00:19:12.126 fused_ordering(179) 00:19:12.126 fused_ordering(180) 00:19:12.126 fused_ordering(181) 00:19:12.126 fused_ordering(182) 00:19:12.126 fused_ordering(183) 00:19:12.126 fused_ordering(184) 00:19:12.126 fused_ordering(185) 00:19:12.126 fused_ordering(186) 00:19:12.126 fused_ordering(187) 00:19:12.126 fused_ordering(188) 00:19:12.126 fused_ordering(189) 00:19:12.126 fused_ordering(190) 00:19:12.126 fused_ordering(191) 00:19:12.126 fused_ordering(192) 00:19:12.126 fused_ordering(193) 00:19:12.126 fused_ordering(194) 00:19:12.126 fused_ordering(195) 00:19:12.126 fused_ordering(196) 00:19:12.126 fused_ordering(197) 00:19:12.126 fused_ordering(198) 00:19:12.126 fused_ordering(199) 00:19:12.126 fused_ordering(200) 00:19:12.126 fused_ordering(201) 00:19:12.126 fused_ordering(202) 00:19:12.126 fused_ordering(203) 00:19:12.126 fused_ordering(204) 00:19:12.126 fused_ordering(205) 00:19:12.696 fused_ordering(206) 00:19:12.696 fused_ordering(207) 00:19:12.696 fused_ordering(208) 00:19:12.696 fused_ordering(209) 00:19:12.696 fused_ordering(210) 00:19:12.696 fused_ordering(211) 00:19:12.696 fused_ordering(212) 00:19:12.696 fused_ordering(213) 00:19:12.696 fused_ordering(214) 00:19:12.696 fused_ordering(215) 00:19:12.696 fused_ordering(216) 00:19:12.696 fused_ordering(217) 00:19:12.696 fused_ordering(218) 00:19:12.696 fused_ordering(219) 00:19:12.696 fused_ordering(220) 00:19:12.696 fused_ordering(221) 00:19:12.696 fused_ordering(222) 00:19:12.696 fused_ordering(223) 00:19:12.696 fused_ordering(224) 00:19:12.696 fused_ordering(225) 00:19:12.696 fused_ordering(226) 00:19:12.696 fused_ordering(227) 00:19:12.696 fused_ordering(228) 00:19:12.696 fused_ordering(229) 00:19:12.696 fused_ordering(230) 00:19:12.696 fused_ordering(231) 00:19:12.696 fused_ordering(232) 00:19:12.696 
fused_ordering(233) 00:19:12.696 fused_ordering(234) 00:19:12.696 fused_ordering(235) 00:19:12.696 fused_ordering(236) 00:19:12.696 fused_ordering(237) 00:19:12.696 fused_ordering(238) 00:19:12.696 fused_ordering(239) 00:19:12.696 fused_ordering(240) 00:19:12.696 fused_ordering(241) 00:19:12.696 fused_ordering(242) 00:19:12.696 fused_ordering(243) 00:19:12.696 fused_ordering(244) 00:19:12.696 fused_ordering(245) 00:19:12.696 fused_ordering(246) 00:19:12.696 fused_ordering(247) 00:19:12.696 fused_ordering(248) 00:19:12.696 fused_ordering(249) 00:19:12.696 fused_ordering(250) 00:19:12.696 fused_ordering(251) 00:19:12.696 fused_ordering(252) 00:19:12.696 fused_ordering(253) 00:19:12.696 fused_ordering(254) 00:19:12.696 fused_ordering(255) 00:19:12.696 fused_ordering(256) 00:19:12.696 fused_ordering(257) 00:19:12.696 fused_ordering(258) 00:19:12.696 fused_ordering(259) 00:19:12.696 fused_ordering(260) 00:19:12.696 fused_ordering(261) 00:19:12.696 fused_ordering(262) 00:19:12.696 fused_ordering(263) 00:19:12.696 fused_ordering(264) 00:19:12.696 fused_ordering(265) 00:19:12.696 fused_ordering(266) 00:19:12.696 fused_ordering(267) 00:19:12.696 fused_ordering(268) 00:19:12.696 fused_ordering(269) 00:19:12.696 fused_ordering(270) 00:19:12.696 fused_ordering(271) 00:19:12.696 fused_ordering(272) 00:19:12.696 fused_ordering(273) 00:19:12.696 fused_ordering(274) 00:19:12.696 fused_ordering(275) 00:19:12.696 fused_ordering(276) 00:19:12.696 fused_ordering(277) 00:19:12.697 fused_ordering(278) 00:19:12.697 fused_ordering(279) 00:19:12.697 fused_ordering(280) 00:19:12.697 fused_ordering(281) 00:19:12.697 fused_ordering(282) 00:19:12.697 fused_ordering(283) 00:19:12.697 fused_ordering(284) 00:19:12.697 fused_ordering(285) 00:19:12.697 fused_ordering(286) 00:19:12.697 fused_ordering(287) 00:19:12.697 fused_ordering(288) 00:19:12.697 fused_ordering(289) 00:19:12.697 fused_ordering(290) 00:19:12.697 fused_ordering(291) 00:19:12.697 fused_ordering(292) 00:19:12.697 fused_ordering(293) 
00:19:12.697 fused_ordering(294) 00:19:12.697 fused_ordering(295) 00:19:12.697 fused_ordering(296) 00:19:12.697 fused_ordering(297) 00:19:12.697 fused_ordering(298) 00:19:12.697 fused_ordering(299) 00:19:12.697 fused_ordering(300) 00:19:12.697 fused_ordering(301) 00:19:12.697 fused_ordering(302) 00:19:12.697 fused_ordering(303) 00:19:12.697 fused_ordering(304) 00:19:12.697 fused_ordering(305) 00:19:12.697 fused_ordering(306) 00:19:12.697 fused_ordering(307) 00:19:12.697 fused_ordering(308) 00:19:12.697 fused_ordering(309) 00:19:12.697 fused_ordering(310) 00:19:12.697 fused_ordering(311) 00:19:12.697 fused_ordering(312) 00:19:12.697 fused_ordering(313) 00:19:12.697 fused_ordering(314) 00:19:12.697 fused_ordering(315) 00:19:12.697 fused_ordering(316) 00:19:12.697 fused_ordering(317) 00:19:12.697 fused_ordering(318) 00:19:12.697 fused_ordering(319) 00:19:12.697 fused_ordering(320) 00:19:12.697 fused_ordering(321) 00:19:12.697 fused_ordering(322) 00:19:12.697 fused_ordering(323) 00:19:12.697 fused_ordering(324) 00:19:12.697 fused_ordering(325) 00:19:12.697 fused_ordering(326) 00:19:12.697 fused_ordering(327) 00:19:12.697 fused_ordering(328) 00:19:12.697 fused_ordering(329) 00:19:12.697 fused_ordering(330) 00:19:12.697 fused_ordering(331) 00:19:12.697 fused_ordering(332) 00:19:12.697 fused_ordering(333) 00:19:12.697 fused_ordering(334) 00:19:12.697 fused_ordering(335) 00:19:12.697 fused_ordering(336) 00:19:12.697 fused_ordering(337) 00:19:12.697 fused_ordering(338) 00:19:12.697 fused_ordering(339) 00:19:12.697 fused_ordering(340) 00:19:12.697 fused_ordering(341) 00:19:12.697 fused_ordering(342) 00:19:12.697 fused_ordering(343) 00:19:12.697 fused_ordering(344) 00:19:12.697 fused_ordering(345) 00:19:12.697 fused_ordering(346) 00:19:12.697 fused_ordering(347) 00:19:12.697 fused_ordering(348) 00:19:12.697 fused_ordering(349) 00:19:12.697 fused_ordering(350) 00:19:12.697 fused_ordering(351) 00:19:12.697 fused_ordering(352) 00:19:12.697 fused_ordering(353) 00:19:12.697 
fused_ordering(354) 00:19:12.697 fused_ordering(355) 00:19:12.697 fused_ordering(356) 00:19:12.697 fused_ordering(357) 00:19:12.697 fused_ordering(358) 00:19:12.697 fused_ordering(359) 00:19:12.697 fused_ordering(360) 00:19:12.697 fused_ordering(361) 00:19:12.697 fused_ordering(362) 00:19:12.697 fused_ordering(363) 00:19:12.697 fused_ordering(364) 00:19:12.697 fused_ordering(365) 00:19:12.697 fused_ordering(366) 00:19:12.697 fused_ordering(367) 00:19:12.697 fused_ordering(368) 00:19:12.697 fused_ordering(369) 00:19:12.697 fused_ordering(370) 00:19:12.697 fused_ordering(371) 00:19:12.697 fused_ordering(372) 00:19:12.697 fused_ordering(373) 00:19:12.697 fused_ordering(374) 00:19:12.697 fused_ordering(375) 00:19:12.697 fused_ordering(376) 00:19:12.697 fused_ordering(377) 00:19:12.697 fused_ordering(378) 00:19:12.697 fused_ordering(379) 00:19:12.697 fused_ordering(380) 00:19:12.697 fused_ordering(381) 00:19:12.697 fused_ordering(382) 00:19:12.697 fused_ordering(383) 00:19:12.697 fused_ordering(384) 00:19:12.697 fused_ordering(385) 00:19:12.697 fused_ordering(386) 00:19:12.697 fused_ordering(387) 00:19:12.697 fused_ordering(388) 00:19:12.697 fused_ordering(389) 00:19:12.697 fused_ordering(390) 00:19:12.697 fused_ordering(391) 00:19:12.697 fused_ordering(392) 00:19:12.697 fused_ordering(393) 00:19:12.697 fused_ordering(394) 00:19:12.697 fused_ordering(395) 00:19:12.697 fused_ordering(396) 00:19:12.697 fused_ordering(397) 00:19:12.697 fused_ordering(398) 00:19:12.697 fused_ordering(399) 00:19:12.697 fused_ordering(400) 00:19:12.697 fused_ordering(401) 00:19:12.697 fused_ordering(402) 00:19:12.697 fused_ordering(403) 00:19:12.697 fused_ordering(404) 00:19:12.697 fused_ordering(405) 00:19:12.697 fused_ordering(406) 00:19:12.697 fused_ordering(407) 00:19:12.697 fused_ordering(408) 00:19:12.697 fused_ordering(409) 00:19:12.697 fused_ordering(410) 00:19:12.957 fused_ordering(411) 00:19:12.957 fused_ordering(412) 00:19:12.957 fused_ordering(413) 00:19:12.957 fused_ordering(414) 
00:19:12.957 fused_ordering(415) 00:19:12.957 fused_ordering(416) 00:19:12.957 fused_ordering(417) 00:19:12.957 fused_ordering(418) 00:19:12.957 fused_ordering(419) 00:19:12.957 fused_ordering(420) 00:19:12.957 fused_ordering(421) 00:19:12.957 fused_ordering(422) 00:19:12.957 fused_ordering(423) 00:19:12.957 fused_ordering(424) 00:19:12.957 fused_ordering(425) 00:19:12.957 fused_ordering(426) 00:19:12.957 fused_ordering(427) 00:19:12.957 fused_ordering(428) 00:19:12.957 fused_ordering(429) 00:19:12.957 fused_ordering(430) 00:19:12.957 fused_ordering(431) 00:19:12.957 fused_ordering(432) 00:19:12.957 fused_ordering(433) 00:19:12.957 fused_ordering(434) 00:19:12.957 fused_ordering(435) 00:19:12.957 fused_ordering(436) 00:19:12.957 fused_ordering(437) 00:19:12.957 fused_ordering(438) 00:19:12.957 fused_ordering(439) 00:19:12.957 fused_ordering(440) 00:19:12.957 fused_ordering(441) 00:19:12.957 fused_ordering(442) 00:19:12.957 fused_ordering(443) 00:19:12.957 fused_ordering(444) 00:19:12.957 fused_ordering(445) 00:19:12.957 fused_ordering(446) 00:19:12.957 fused_ordering(447) 00:19:12.957 fused_ordering(448) 00:19:12.957 fused_ordering(449) 00:19:12.957 fused_ordering(450) 00:19:12.957 fused_ordering(451) 00:19:12.957 fused_ordering(452) 00:19:12.957 fused_ordering(453) 00:19:12.957 fused_ordering(454) 00:19:12.957 fused_ordering(455) 00:19:12.957 fused_ordering(456) 00:19:12.957 fused_ordering(457) 00:19:12.958 fused_ordering(458) 00:19:12.958 fused_ordering(459) 00:19:12.958 fused_ordering(460) 00:19:12.958 fused_ordering(461) 00:19:12.958 fused_ordering(462) 00:19:12.958 fused_ordering(463) 00:19:12.958 fused_ordering(464) 00:19:12.958 fused_ordering(465) 00:19:12.958 fused_ordering(466) 00:19:12.958 fused_ordering(467) 00:19:12.958 fused_ordering(468) 00:19:12.958 fused_ordering(469) 00:19:12.958 fused_ordering(470) 00:19:12.958 fused_ordering(471) 00:19:12.958 fused_ordering(472) 00:19:12.958 fused_ordering(473) 00:19:12.958 fused_ordering(474) 00:19:12.958 
fused_ordering(475) 00:19:12.958 fused_ordering(476) 00:19:12.958 fused_ordering(477) 00:19:12.958 fused_ordering(478) 00:19:12.958 fused_ordering(479) 00:19:12.958 fused_ordering(480) 00:19:12.958 fused_ordering(481) 00:19:12.958 fused_ordering(482) 00:19:12.958 fused_ordering(483) 00:19:12.958 fused_ordering(484) 00:19:12.958 fused_ordering(485) 00:19:12.958 fused_ordering(486) 00:19:12.958 fused_ordering(487) 00:19:12.958 fused_ordering(488) 00:19:12.958 fused_ordering(489) 00:19:12.958 fused_ordering(490) 00:19:12.958 fused_ordering(491) 00:19:12.958 fused_ordering(492) 00:19:12.958 fused_ordering(493) 00:19:12.958 fused_ordering(494) 00:19:12.958 fused_ordering(495) 00:19:12.958 fused_ordering(496) 00:19:12.958 fused_ordering(497) 00:19:12.958 fused_ordering(498) 00:19:12.958 fused_ordering(499) 00:19:12.958 fused_ordering(500) 00:19:12.958 fused_ordering(501) 00:19:12.958 fused_ordering(502) 00:19:12.958 fused_ordering(503) 00:19:12.958 fused_ordering(504) 00:19:12.958 fused_ordering(505) 00:19:12.958 fused_ordering(506) 00:19:12.958 fused_ordering(507) 00:19:12.958 fused_ordering(508) 00:19:12.958 fused_ordering(509) 00:19:12.958 fused_ordering(510) 00:19:12.958 fused_ordering(511) 00:19:12.958 fused_ordering(512) 00:19:12.958 fused_ordering(513) 00:19:12.958 fused_ordering(514) 00:19:12.958 fused_ordering(515) 00:19:12.958 fused_ordering(516) 00:19:12.958 fused_ordering(517) 00:19:12.958 fused_ordering(518) 00:19:12.958 fused_ordering(519) 00:19:12.958 fused_ordering(520) 00:19:12.958 fused_ordering(521) 00:19:12.958 fused_ordering(522) 00:19:12.958 fused_ordering(523) 00:19:12.958 fused_ordering(524) 00:19:12.958 fused_ordering(525) 00:19:12.958 fused_ordering(526) 00:19:12.958 fused_ordering(527) 00:19:12.958 fused_ordering(528) 00:19:12.958 fused_ordering(529) 00:19:12.958 fused_ordering(530) 00:19:12.958 fused_ordering(531) 00:19:12.958 fused_ordering(532) 00:19:12.958 fused_ordering(533) 00:19:12.958 fused_ordering(534) 00:19:12.958 fused_ordering(535) 
00:19:12.958 fused_ordering(536) 00:19:12.958 fused_ordering(537) 00:19:12.958 fused_ordering(538) 00:19:12.958 fused_ordering(539) 00:19:12.958 fused_ordering(540) 00:19:12.958 fused_ordering(541) 00:19:12.958 fused_ordering(542) 00:19:12.958 fused_ordering(543) 00:19:12.958 fused_ordering(544) 00:19:12.958 fused_ordering(545) 00:19:12.958 fused_ordering(546) 00:19:12.958 fused_ordering(547) 00:19:12.958 fused_ordering(548) 00:19:12.958 fused_ordering(549) 00:19:12.958 fused_ordering(550) 00:19:12.958 fused_ordering(551) 00:19:12.958 fused_ordering(552) 00:19:12.958 fused_ordering(553) 00:19:12.958 fused_ordering(554) 00:19:12.958 fused_ordering(555) 00:19:12.958 fused_ordering(556) 00:19:12.958 fused_ordering(557) 00:19:12.958 fused_ordering(558) 00:19:12.958 fused_ordering(559) 00:19:12.958 fused_ordering(560) 00:19:12.958 fused_ordering(561) 00:19:12.958 fused_ordering(562) 00:19:12.958 fused_ordering(563) 00:19:12.958 fused_ordering(564) 00:19:12.958 fused_ordering(565) 00:19:12.958 fused_ordering(566) 00:19:12.958 fused_ordering(567) 00:19:12.958 fused_ordering(568) 00:19:12.958 fused_ordering(569) 00:19:12.958 fused_ordering(570) 00:19:12.958 fused_ordering(571) 00:19:12.958 fused_ordering(572) 00:19:12.958 fused_ordering(573) 00:19:12.958 fused_ordering(574) 00:19:12.958 fused_ordering(575) 00:19:12.958 fused_ordering(576) 00:19:12.958 fused_ordering(577) 00:19:12.958 fused_ordering(578) 00:19:12.958 fused_ordering(579) 00:19:12.958 fused_ordering(580) 00:19:12.958 fused_ordering(581) 00:19:12.958 fused_ordering(582) 00:19:12.958 fused_ordering(583) 00:19:12.958 fused_ordering(584) 00:19:12.958 fused_ordering(585) 00:19:12.958 fused_ordering(586) 00:19:12.958 fused_ordering(587) 00:19:12.958 fused_ordering(588) 00:19:12.958 fused_ordering(589) 00:19:12.958 fused_ordering(590) 00:19:12.958 fused_ordering(591) 00:19:12.958 fused_ordering(592) 00:19:12.958 fused_ordering(593) 00:19:12.958 fused_ordering(594) 00:19:12.958 fused_ordering(595) 00:19:12.958 
fused_ordering(596) 00:19:12.958 fused_ordering(597) 00:19:12.958 fused_ordering(598) 00:19:12.958 fused_ordering(599) 00:19:12.958 fused_ordering(600) 00:19:12.958 fused_ordering(601) 00:19:12.958 fused_ordering(602) 00:19:12.958 fused_ordering(603) 00:19:12.958 fused_ordering(604) 00:19:12.958 fused_ordering(605) 00:19:12.958 fused_ordering(606) 00:19:12.958 fused_ordering(607) 00:19:12.958 fused_ordering(608) 00:19:12.958 fused_ordering(609) 00:19:12.958 fused_ordering(610) 00:19:12.958 fused_ordering(611) 00:19:12.958 fused_ordering(612) 00:19:12.958 fused_ordering(613) 00:19:12.958 fused_ordering(614) 00:19:12.958 fused_ordering(615) 00:19:13.531 fused_ordering(616) 00:19:13.531 fused_ordering(617) 00:19:13.531 fused_ordering(618) 00:19:13.531 fused_ordering(619) 00:19:13.531 fused_ordering(620) 00:19:13.531 fused_ordering(621) 00:19:13.531 fused_ordering(622) 00:19:13.531 fused_ordering(623) 00:19:13.531 fused_ordering(624) 00:19:13.531 fused_ordering(625) 00:19:13.531 fused_ordering(626) 00:19:13.531 fused_ordering(627) 00:19:13.531 fused_ordering(628) 00:19:13.531 fused_ordering(629) 00:19:13.531 fused_ordering(630) 00:19:13.531 fused_ordering(631) 00:19:13.531 fused_ordering(632) 00:19:13.531 fused_ordering(633) 00:19:13.531 fused_ordering(634) 00:19:13.531 fused_ordering(635) 00:19:13.531 fused_ordering(636) 00:19:13.531 fused_ordering(637) 00:19:13.531 fused_ordering(638) 00:19:13.531 fused_ordering(639) 00:19:13.531 fused_ordering(640) 00:19:13.531 fused_ordering(641) 00:19:13.531 fused_ordering(642) 00:19:13.531 fused_ordering(643) 00:19:13.531 fused_ordering(644) 00:19:13.531 fused_ordering(645) 00:19:13.531 fused_ordering(646) 00:19:13.531 fused_ordering(647) 00:19:13.531 fused_ordering(648) 00:19:13.531 fused_ordering(649) 00:19:13.531 fused_ordering(650) 00:19:13.531 fused_ordering(651) 00:19:13.531 fused_ordering(652) 00:19:13.531 fused_ordering(653) 00:19:13.531 fused_ordering(654) 00:19:13.531 fused_ordering(655) 00:19:13.531 fused_ordering(656) 
00:19:13.531 fused_ordering(657) 00:19:13.531 fused_ordering(658) 00:19:13.531 fused_ordering(659) 00:19:13.531 fused_ordering(660) 00:19:13.531 fused_ordering(661) 00:19:13.531 fused_ordering(662) 00:19:13.531 fused_ordering(663) 00:19:13.531 fused_ordering(664) 00:19:13.531 fused_ordering(665) 00:19:13.531 fused_ordering(666) 00:19:13.531 fused_ordering(667) 00:19:13.531 fused_ordering(668) 00:19:13.531 fused_ordering(669) 00:19:13.531 fused_ordering(670) 00:19:13.531 fused_ordering(671) 00:19:13.531 fused_ordering(672) 00:19:13.531 fused_ordering(673) 00:19:13.531 fused_ordering(674) 00:19:13.531 fused_ordering(675) 00:19:13.531 fused_ordering(676) 00:19:13.531 fused_ordering(677) 00:19:13.531 fused_ordering(678) 00:19:13.531 fused_ordering(679) 00:19:13.531 fused_ordering(680) 00:19:13.531 fused_ordering(681) 00:19:13.531 fused_ordering(682) 00:19:13.531 fused_ordering(683) 00:19:13.531 fused_ordering(684) 00:19:13.531 fused_ordering(685) 00:19:13.531 fused_ordering(686) 00:19:13.531 fused_ordering(687) 00:19:13.531 fused_ordering(688) 00:19:13.531 fused_ordering(689) 00:19:13.531 fused_ordering(690) 00:19:13.531 fused_ordering(691) 00:19:13.531 fused_ordering(692) 00:19:13.531 fused_ordering(693) 00:19:13.531 fused_ordering(694) 00:19:13.531 fused_ordering(695) 00:19:13.531 fused_ordering(696) 00:19:13.531 fused_ordering(697) 00:19:13.531 fused_ordering(698) 00:19:13.531 fused_ordering(699) 00:19:13.531 fused_ordering(700) 00:19:13.531 fused_ordering(701) 00:19:13.531 fused_ordering(702) 00:19:13.531 fused_ordering(703) 00:19:13.531 fused_ordering(704) 00:19:13.531 fused_ordering(705) 00:19:13.531 fused_ordering(706) 00:19:13.531 fused_ordering(707) 00:19:13.531 fused_ordering(708) 00:19:13.531 fused_ordering(709) 00:19:13.531 fused_ordering(710) 00:19:13.531 fused_ordering(711) 00:19:13.531 fused_ordering(712) 00:19:13.531 fused_ordering(713) 00:19:13.531 fused_ordering(714) 00:19:13.531 fused_ordering(715) 00:19:13.531 fused_ordering(716) 00:19:13.531 
00:19:13.531 fused_ordering(717) … 00:19:14.104 fused_ordering(1018) [302 repetitive per-iteration fused_ordering trace lines condensed; counters 717–1018, timestamps 00:19:13.531–00:19:14.104]
fused_ordering(1019) 00:19:14.104 fused_ordering(1020) 00:19:14.104 fused_ordering(1021) 00:19:14.104 fused_ordering(1022) 00:19:14.104 fused_ordering(1023) 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:14.104 rmmod nvme_tcp 00:19:14.104 rmmod nvme_fabrics 00:19:14.104 rmmod nvme_keyring 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 1937975 ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 1937975 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1937975 ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1937975 00:19:14.104 08:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937975 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937975' 00:19:14.104 killing process with pid 1937975 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1937975 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1937975 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:14.104 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:19:16.647 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:19:16.648 00:19:16.648 real 0m14.431s 00:19:16.648 user 0m7.231s 00:19:16.648 sys 0m7.875s 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 ************************************ 00:19:16.648 END TEST nvmf_fused_ordering 00:19:16.648 ************************************ 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 ************************************ 00:19:16.648 START TEST nvmf_ns_masking 00:19:16.648 
************************************ 00:19:16.648 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:16.648 * Looking for test storage... 00:19:16.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.648 --rc genhtml_branch_coverage=1 00:19:16.648 --rc genhtml_function_coverage=1 00:19:16.648 --rc genhtml_legend=1 00:19:16.648 --rc geninfo_all_blocks=1 00:19:16.648 --rc geninfo_unexecuted_blocks=1 00:19:16.648 00:19:16.648 ' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.648 --rc genhtml_branch_coverage=1 00:19:16.648 --rc genhtml_function_coverage=1 00:19:16.648 --rc genhtml_legend=1 00:19:16.648 --rc geninfo_all_blocks=1 00:19:16.648 --rc geninfo_unexecuted_blocks=1 00:19:16.648 00:19:16.648 ' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.648 --rc genhtml_branch_coverage=1 00:19:16.648 --rc genhtml_function_coverage=1 00:19:16.648 --rc genhtml_legend=1 00:19:16.648 --rc geninfo_all_blocks=1 00:19:16.648 --rc geninfo_unexecuted_blocks=1 00:19:16.648 00:19:16.648 ' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.648 --rc genhtml_branch_coverage=1 00:19:16.648 --rc genhtml_function_coverage=1 00:19:16.648 --rc genhtml_legend=1 00:19:16.648 --rc geninfo_all_blocks=1 00:19:16.648 --rc geninfo_unexecuted_blocks=1 00:19:16.648 00:19:16.648 ' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.648 08:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.648 08:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:16.648 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:16.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=84a31c4f-87b0-410b-a61a-026102e9f12a 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=1bf427d3-d244-48d1-a81d-7ceffd8005f7 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5c9456ed-8c88-405f-b976-8ee7e068a196 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:19:16.649 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:19:24.786 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:24.787 
08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:24.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:24.787 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:24.787 08:16:29 
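The log above shows `gather_supported_nvmf_pci_devs` building per-family arrays (`e810`, `x722`, `mlx`) keyed by PCI vendor:device pairs, then matching the two discovered ports (0x8086:0x159b) as E810. A minimal Python sketch of that bucketing logic, using only the IDs visible in the log (the function and sample lookup are illustrative, not part of SPDK):

```python
# Hedged sketch: mirrors how nvmf/common.sh groups NVMe-oF-capable NICs by
# PCI vendor:device ID. All ID values below are taken from the log above;
# classify() itself is a hypothetical helper for illustration.
INTEL, MELLANOX = 0x8086, 0x15B3

E810 = {0x1592, 0x159B}   # Intel E810 variants
X722 = {0x37D2}           # Intel X722
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B,
       0x1017, 0x1019, 0x1015, 0x1013}  # Mellanox ConnectX/BlueField IDs

def classify(vendor, device):
    """Return the NIC family a PCI ID belongs to, or None if unsupported."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return None

# The two ports found in the log, 0000:31:00.0/1, are 0x8086:0x159b:
print(classify(0x8086, 0x159B))  # e810
```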
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:24.787 Found net devices under 0000:31:00.0: cvl_0_0 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.787 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:24.787 Found net devices under 0000:31:00.1: cvl_0_1 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@247 -- # create_target_ns 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:24.787 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:24.788 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:24.788 10.0.0.1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:24.788 08:16:29 
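In the trace above, `setup.sh`'s `val_to_ip` turns the integer IP pool value (`0x0a000001` = 167772161) into a dotted quad via `printf '%u.%u.%u.%u'`, handing out consecutive addresses (10.0.0.1 for the initiator, 10.0.0.2 for the target). A small Python equivalent of that conversion (the function name mirrors the shell helper; the Python itself is a sketch, not SPDK code):

```python
# Hedged sketch of setup.sh's val_to_ip: unpack a 32-bit value into its
# four bytes, most significant first, and join them with dots.
def val_to_ip(val):
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(val_to_ip(167772161))  # 10.0.0.1  (0x0a000001, initiator side)
print(val_to_ip(167772162))  # 10.0.0.2  (target side, ip_pool + 1)
```

This matches the log's `ip addr add 10.0.0.1/24 dev cvl_0_0` and the namespaced `ip addr add 10.0.0.2/24 dev cvl_0_1` that follow.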
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:24.788 10.0.0.2 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:24.788 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:24.788 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:24.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.534 ms 00:19:24.788 00:19:24.788 --- 10.0.0.1 ping statistics --- 00:19:24.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.788 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:24.788 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias' 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:25.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:25.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:19:25.050 00:19:25.050 --- 10.0.0.2 ping statistics --- 00:19:25.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.050 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:25.050 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:25.051 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 
1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 1 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:19:25.051 08:16:29 
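The repeated `cat /sys/class/net/<dev>/ifalias` calls above are how `get_ip_address` recovers each interface's test IP: the setup phase `tee`s the address into the kernel's `ifalias` attribute, and later lookups just read it back (inside the `nvmf_ns_spdk` namespace for target devices). A sketch of that readback against a stand-in directory tree, since the real `/sys` requires the configured interfaces (paths and helper are illustrative only):

```python
# Hedged sketch of get_ip_address from nvmf/setup.sh: the scripts stash each
# interface's test IP in /sys/class/net/<dev>/ifalias and read it back later.
# A temporary directory stands in for /sys/class/net here.
import pathlib
import tempfile

def get_ip_address(net_root, dev):
    """Read the IP previously stored in <net_root>/<dev>/ifalias."""
    return (pathlib.Path(net_root) / dev / "ifalias").read_text().strip()

with tempfile.TemporaryDirectory() as root:
    dev_dir = pathlib.Path(root) / "cvl_0_0"
    dev_dir.mkdir()
    # Equivalent of the log's: echo 10.0.0.1 | tee .../cvl_0_0/ifalias
    (dev_dir / "ifalias").write_text("10.0.0.1\n")
    print(get_ip_address(root, "cvl_0_0"))  # 10.0.0.1
```

The lookups for `initiator1`/`target1` in the log return empty because only one interface pair exists in `dev_map`, so `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` stay unset.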
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=1943369 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 1943369 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1943369 ']' 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.051 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.051 [2024-11-20 08:16:29.692561] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:19:25.051 [2024-11-20 08:16:29.692634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.312 [2024-11-20 08:16:29.782750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.312 [2024-11-20 08:16:29.823270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.312 [2024-11-20 08:16:29.823306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.312 [2024-11-20 08:16:29.823315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.312 [2024-11-20 08:16:29.823322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.312 [2024-11-20 08:16:29.823327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.312 [2024-11-20 08:16:29.823902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.883 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:26.144 [2024-11-20 08:16:30.681183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.144 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:26.144 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:26.144 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:26.144 Malloc1 00:19:26.405 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:26.405 Malloc2 00:19:26.405 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:26.666 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:26.927 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.928 [2024-11-20 08:16:31.540568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.928 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:26.928 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c9456ed-8c88-405f-b976-8ee7e068a196 -a 10.0.0.2 -s 4420 -i 4 00:19:27.187 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:27.187 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:27.187 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.187 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:27.188 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:29.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:29.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.102 [ 0]:0x1 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.102 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.363 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db24206392f64495b13446a2a59f9493 00:19:29.363 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db24206392f64495b13446a2a59f9493 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.363 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.363 [ 0]:0x1 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db24206392f64495b13446a2a59f9493 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db24206392f64495b13446a2a59f9493 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.363 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.624 [ 1]:0x2 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:29.624 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.886 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.148 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:30.148 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:30.148 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c9456ed-8c88-405f-b976-8ee7e068a196 -a 10.0.0.2 -s 4420 -i 4 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:30.408 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:32.955 08:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.955 [ 0]:0x2 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.955 [ 0]:0x1 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db24206392f64495b13446a2a59f9493 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db24206392f64495b13446a2a59f9493 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:19:32.955 [ 1]:0x2 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.955 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:33.217 [ 0]:0x2 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.217 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:33.478 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:33.478 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c9456ed-8c88-405f-b976-8ee7e068a196 -a 10.0.0.2 -s 4420 -i 4 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:33.738 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:35.651 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:35.912 [ 0]:0x1 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db24206392f64495b13446a2a59f9493 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db24206392f64495b13446a2a59f9493 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:35.912 [ 1]:0x2 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.912 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:36.187 [ 0]:0x2 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:36.187 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:36.448 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:36.448 [2024-11-20 08:16:41.079921] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:36.448 request: 00:19:36.448 { 00:19:36.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.448 "nsid": 2, 00:19:36.448 "host": "nqn.2016-06.io.spdk:host1", 00:19:36.448 "method": "nvmf_ns_remove_host", 00:19:36.448 "req_id": 1 00:19:36.448 } 00:19:36.448 Got JSON-RPC error response 00:19:36.448 response: 00:19:36.448 { 00:19:36.448 "code": -32602, 00:19:36.448 "message": "Invalid parameters" 00:19:36.448 } 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # 
valid_exec_arg ns_is_visible 0x1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.448 08:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:36.448 [ 0]:0x2 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:36.448 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:36.709 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2759402afc2475ab2363ff5a301a978 00:19:36.709 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2759402afc2475ab2363ff5a301a978 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:36.709 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:36.709 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1945837 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1945837 /var/tmp/host.sock 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1945837 ']' 00:19:36.710 
08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:36.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.710 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:36.710 [2024-11-20 08:16:41.329769] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:19:36.710 [2024-11-20 08:16:41.329822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1945837 ] 00:19:36.710 [2024-11-20 08:16:41.425214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.970 [2024-11-20 08:16:41.461039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.541 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.541 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:37.541 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:37.801 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:37.801 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 84a31c4f-87b0-410b-a61a-026102e9f12a 00:19:37.801 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:37.801 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84A31C4F87B0410BA61A026102E9F12A -i 00:19:38.062 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1bf427d3-d244-48d1-a81d-7ceffd8005f7 00:19:38.062 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:38.062 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1BF427D3D24448D1A81D7CEFFD8005F7 -i 00:19:38.062 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:38.322 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:38.582 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:38.582 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:38.842 nvme0n1 00:19:38.842 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:38.842 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:39.104 nvme1n2 00:19:39.104 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:39.104 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:39.104 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:39.104 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:39.104 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:39.364 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:39.364 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:39.364 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:39.364 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:39.364 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 84a31c4f-87b0-410b-a61a-026102e9f12a == \8\4\a\3\1\c\4\f\-\8\7\b\0\-\4\1\0\b\-\a\6\1\a\-\0\2\6\1\0\2\e\9\f\1\2\a ]] 00:19:39.364 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:39.365 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:39.365 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:39.624 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1bf427d3-d244-48d1-a81d-7ceffd8005f7 == \1\b\f\4\2\7\d\3\-\d\2\4\4\-\4\8\d\1\-\a\8\1\d\-\7\c\e\f\f\d\8\0\0\5\f\7 ]] 00:19:39.624 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 84a31c4f-87b0-410b-a61a-026102e9f12a 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A31C4F87B0410BA61A026102E9F12A 00:19:39.883 08:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A31C4F87B0410BA61A026102E9F12A 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:39.883 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A31C4F87B0410BA61A026102E9F12A 00:19:40.144 [2024-11-20 08:16:44.741912] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:19:40.144 [2024-11-20 08:16:44.741949] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:40.144 [2024-11-20 08:16:44.741958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:40.144 request: 00:19:40.144 { 00:19:40.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.144 "namespace": { 00:19:40.144 "bdev_name": "invalid", 00:19:40.144 "nsid": 1, 00:19:40.144 "nguid": "84A31C4F87B0410BA61A026102E9F12A", 00:19:40.144 "no_auto_visible": false 00:19:40.144 }, 00:19:40.144 "method": "nvmf_subsystem_add_ns", 00:19:40.144 "req_id": 1 00:19:40.144 } 00:19:40.144 Got JSON-RPC error response 00:19:40.144 response: 00:19:40.144 { 00:19:40.144 "code": -32602, 00:19:40.144 "message": "Invalid parameters" 00:19:40.144 } 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 84a31c4f-87b0-410b-a61a-026102e9f12a 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:40.144 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84A31C4F87B0410BA61A026102E9F12A -i 00:19:40.404 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:42.317 08:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:42.317 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:42.317 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1945837 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1945837 ']' 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1945837 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1945837 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1945837' 00:19:42.577 killing process with pid 1945837 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1945837 00:19:42.577 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1945837 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:42.837 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:43.097 rmmod nvme_tcp 00:19:43.097 rmmod nvme_fabrics 00:19:43.097 rmmod nvme_keyring 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 1943369 ']' 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 1943369 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1943369 ']' 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1943369 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 
00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943369 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943369' 00:19:43.098 killing process with pid 1943369 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1943369 00:19:43.098 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1943369 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:43.359 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:45.272 08:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:45.272 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:19:45.272 00:19:45.272 real 0m28.988s 00:19:45.272 user 0m31.843s 00:19:45.272 sys 0m8.715s 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:45.273 ************************************ 00:19:45.273 END TEST nvmf_ns_masking 00:19:45.273 ************************************ 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.273 08:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.535 ************************************ 00:19:45.535 START TEST nvmf_nvme_cli 00:19:45.535 ************************************ 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:45.535 * Looking for test storage... 00:19:45.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@344 -- # case "$op" in 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.535 --rc genhtml_branch_coverage=1 00:19:45.535 --rc genhtml_function_coverage=1 00:19:45.535 --rc genhtml_legend=1 00:19:45.535 --rc geninfo_all_blocks=1 00:19:45.535 --rc geninfo_unexecuted_blocks=1 00:19:45.535 00:19:45.535 ' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.535 --rc genhtml_branch_coverage=1 00:19:45.535 --rc genhtml_function_coverage=1 00:19:45.535 --rc genhtml_legend=1 00:19:45.535 --rc geninfo_all_blocks=1 00:19:45.535 --rc geninfo_unexecuted_blocks=1 00:19:45.535 00:19:45.535 ' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.535 --rc genhtml_branch_coverage=1 00:19:45.535 --rc genhtml_function_coverage=1 00:19:45.535 --rc genhtml_legend=1 00:19:45.535 --rc geninfo_all_blocks=1 00:19:45.535 --rc geninfo_unexecuted_blocks=1 00:19:45.535 00:19:45.535 ' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.535 --rc genhtml_branch_coverage=1 00:19:45.535 --rc genhtml_function_coverage=1 00:19:45.535 --rc genhtml_legend=1 00:19:45.535 --rc geninfo_all_blocks=1 00:19:45.535 --rc geninfo_unexecuted_blocks=1 00:19:45.535 00:19:45.535 ' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.535 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:45.536 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:45.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.797 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:19:45.798 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:53.943 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:53.943 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:19:53.943 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:53.944 Found net devices under 0000:31:00.0: cvl_0_0 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:53.944 Found net devices under 0000:31:00.1: cvl_0_1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:53.944 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@247 -- # create_target_ns 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:53.944 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:53.944 10.0.0.1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.944 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:53.944 10.0.0.2 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:53.944 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:53.945 
08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:53.945 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:53.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.615 ms 00:19:53.945 00:19:53.945 --- 10.0.0.1 ping statistics --- 00:19:53.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.945 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:53.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:53.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:19:53.945 00:19:53.945 --- 10.0.0.2 ping statistics --- 00:19:53.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.945 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:53.945 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:19:53.945 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:53.945 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:53.946 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:53.946 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=1951829 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 1951829 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1951829 ']' 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.207 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:54.207 [2024-11-20 08:16:58.751193] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:19:54.207 [2024-11-20 08:16:58.751263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.207 [2024-11-20 08:16:58.844579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.207 [2024-11-20 08:16:58.887003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.207 [2024-11-20 08:16:58.887042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.207 [2024-11-20 08:16:58.887050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.207 [2024-11-20 08:16:58.887057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.207 [2024-11-20 08:16:58.887062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:54.207 [2024-11-20 08:16:58.888883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.207 [2024-11-20 08:16:58.888934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.207 [2024-11-20 08:16:58.889089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.207 [2024-11-20 08:16:58.889089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 [2024-11-20 08:16:59.599702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 Malloc0 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 Malloc1 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 [2024-11-20 08:16:59.697666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.150 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:55.410 00:19:55.410 Discovery Log Number of Records 2, Generation counter 2 00:19:55.410 =====Discovery Log Entry 0====== 00:19:55.410 trtype: tcp 00:19:55.410 adrfam: ipv4 00:19:55.410 subtype: current discovery subsystem 00:19:55.410 treq: not required 00:19:55.410 portid: 0 00:19:55.410 trsvcid: 4420 
00:19:55.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:55.410 traddr: 10.0.0.2 00:19:55.410 eflags: explicit discovery connections, duplicate discovery information 00:19:55.410 sectype: none 00:19:55.410 =====Discovery Log Entry 1====== 00:19:55.410 trtype: tcp 00:19:55.410 adrfam: ipv4 00:19:55.410 subtype: nvme subsystem 00:19:55.410 treq: not required 00:19:55.410 portid: 0 00:19:55.410 trsvcid: 4420 00:19:55.410 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:55.410 traddr: 10.0.0.2 00:19:55.410 eflags: none 00:19:55.410 sectype: none 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:55.410 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:56.792 08:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:56.792 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:56.792 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:56.792 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:56.792 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:56.792 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:58.705 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:58.705 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:58.705 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:58.965 
08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.965 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:58.966 /dev/nvme0n2 ]] 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:58.966 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:59.226 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:59.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:59.486 rmmod nvme_tcp 00:19:59.486 rmmod nvme_fabrics 00:19:59.486 rmmod nvme_keyring 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:59.486 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 1951829 ']' 
00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 1951829 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1951829 ']' 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1951829 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.487 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1951829 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1951829' 00:19:59.747 killing process with pid 1951829 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1951829 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1951829 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@254 -- # local dev 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:59.747 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # return 0 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 
00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@274 -- # iptr 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-restore 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-save 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:02.350 00:20:02.350 real 0m16.423s 00:20:02.350 user 0m24.373s 00:20:02.350 sys 0m6.990s 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:02.350 ************************************ 00:20:02.350 END TEST nvmf_nvme_cli 00:20:02.350 ************************************ 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.350 ************************************ 00:20:02.350 START TEST nvmf_vfio_user 00:20:02.350 ************************************ 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:02.350 * Looking for test storage... 00:20:02.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.350 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.351 08:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.351 --rc genhtml_branch_coverage=1 00:20:02.351 --rc genhtml_function_coverage=1 00:20:02.351 --rc genhtml_legend=1 00:20:02.351 --rc geninfo_all_blocks=1 00:20:02.351 --rc geninfo_unexecuted_blocks=1 00:20:02.351 00:20:02.351 ' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.351 --rc genhtml_branch_coverage=1 00:20:02.351 --rc genhtml_function_coverage=1 00:20:02.351 --rc genhtml_legend=1 00:20:02.351 --rc geninfo_all_blocks=1 00:20:02.351 --rc geninfo_unexecuted_blocks=1 00:20:02.351 00:20:02.351 ' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.351 --rc genhtml_branch_coverage=1 00:20:02.351 --rc genhtml_function_coverage=1 00:20:02.351 --rc genhtml_legend=1 00:20:02.351 --rc geninfo_all_blocks=1 00:20:02.351 --rc geninfo_unexecuted_blocks=1 00:20:02.351 00:20:02.351 ' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.351 --rc genhtml_branch_coverage=1 00:20:02.351 --rc genhtml_function_coverage=1 00:20:02.351 --rc genhtml_legend=1 00:20:02.351 --rc geninfo_all_blocks=1 00:20:02.351 --rc 
geninfo_unexecuted_blocks=1 00:20:02.351 00:20:02.351 ' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:02.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:02.351 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1953463 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1953463' 00:20:02.352 Process pid: 1953463 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1953463 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1953463 ']' 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:02.352 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:02.352 [2024-11-20 08:17:06.796494] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:20:02.352 [2024-11-20 08:17:06.796559] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.352 [2024-11-20 08:17:06.881956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.352 [2024-11-20 08:17:06.923891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:02.352 [2024-11-20 08:17:06.923928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.352 [2024-11-20 08:17:06.923936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.352 [2024-11-20 08:17:06.923943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.352 [2024-11-20 08:17:06.923949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.352 [2024-11-20 08:17:06.925783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.352 [2024-11-20 08:17:06.925957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.352 [2024-11-20 08:17:06.926319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.352 [2024-11-20 08:17:06.926321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.938 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.938 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:02.938 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:03.879 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:04.139 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:04.139 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:04.139 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:04.139 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:20:04.139 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:04.399 Malloc1 00:20:04.399 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:04.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:04.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:04.920 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:04.920 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:04.920 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:05.180 Malloc2 00:20:05.180 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:05.442 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:05.442 08:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:05.705 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:05.705 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:05.706 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:05.706 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:05.706 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:05.706 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:05.706 [2024-11-20 08:17:10.308318] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:20:05.706 [2024-11-20 08:17:10.308363] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954164 ] 00:20:05.706 [2024-11-20 08:17:10.363019] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:05.706 [2024-11-20 08:17:10.371226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:05.706 [2024-11-20 08:17:10.371251] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8486231000 00:20:05.706 [2024-11-20 08:17:10.372228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.373216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.374226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.375233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.376235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.377244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.378244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:05.706 
[2024-11-20 08:17:10.379253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:05.706 [2024-11-20 08:17:10.380261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:05.706 [2024-11-20 08:17:10.380271] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8486226000 00:20:05.706 [2024-11-20 08:17:10.381598] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:05.706 [2024-11-20 08:17:10.403026] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:05.706 [2024-11-20 08:17:10.403064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:05.706 [2024-11-20 08:17:10.405402] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:05.706 [2024-11-20 08:17:10.405451] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:05.706 [2024-11-20 08:17:10.405536] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:05.706 [2024-11-20 08:17:10.405554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:05.706 [2024-11-20 08:17:10.405560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:05.706 [2024-11-20 08:17:10.406399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:05.706 [2024-11-20 08:17:10.406410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:05.706 [2024-11-20 08:17:10.406418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:05.706 [2024-11-20 08:17:10.407409] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:05.706 [2024-11-20 08:17:10.407420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:05.706 [2024-11-20 08:17:10.407428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.408416] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:05.706 [2024-11-20 08:17:10.408426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.409424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:20:05.706 [2024-11-20 08:17:10.409433] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:05.706 [2024-11-20 08:17:10.409438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.409445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.409554] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:05.706 [2024-11-20 08:17:10.409559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.409565] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:05.706 [2024-11-20 08:17:10.410428] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:05.706 [2024-11-20 08:17:10.411428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:05.706 [2024-11-20 08:17:10.412439] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:05.706 [2024-11-20 08:17:10.413431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:05.706 [2024-11-20 08:17:10.413487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:05.706 [2024-11-20 08:17:10.414448] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:05.706 [2024-11-20 08:17:10.414456] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:05.706 [2024-11-20 08:17:10.414462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:20:05.706 [2024-11-20 08:17:10.414497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:05.706 [2024-11-20 08:17:10.414519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:05.706 [2024-11-20 08:17:10.414528] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.706 [2024-11-20 08:17:10.414542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:05.706 [2024-11-20 08:17:10.414579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:05.706 [2024-11-20 08:17:10.414590] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:05.706 [2024-11-20 08:17:10.414596] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:05.706 [2024-11-20 08:17:10.414603] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:05.706 [2024-11-20 08:17:10.414609] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:05.706 [2024-11-20 08:17:10.414617] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:05.706 [2024-11-20 08:17:10.414623] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:05.706 [2024-11-20 08:17:10.414629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:05.706 [2024-11-20 08:17:10.414664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:05.706 [2024-11-20 08:17:10.414677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.706 [2024-11-20 08:17:10.414688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.706 [2024-11-20 08:17:10.414698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.706 [2024-11-20 08:17:10.414707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.706 [2024-11-20 08:17:10.414713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:05.706 [2024-11-20 08:17:10.414730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:05.706 [2024-11-20 08:17:10.414737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.414745] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:05.707 [2024-11-20 08:17:10.414752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.414783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.414846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:05.707 
[2024-11-20 08:17:10.414874] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:05.707 [2024-11-20 08:17:10.414879] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:05.707 [2024-11-20 08:17:10.414883] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.414889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.414899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.414908] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:05.707 [2024-11-20 08:17:10.414917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414932] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:05.707 [2024-11-20 08:17:10.414936] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:05.707 [2024-11-20 08:17:10.414940] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.414946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.414962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.414975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.414990] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:05.707 [2024-11-20 08:17:10.414995] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:05.707 [2024-11-20 08:17:10.414998] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.415004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415061] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:05.707 [2024-11-20 08:17:10.415065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:05.707 [2024-11-20 08:17:10.415070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:05.707 [2024-11-20 08:17:10.415088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 
08:17:10.415151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415178] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:05.707 [2024-11-20 08:17:10.415182] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:05.707 [2024-11-20 08:17:10.415186] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:05.707 [2024-11-20 08:17:10.415190] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:05.707 [2024-11-20 08:17:10.415193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:05.707 [2024-11-20 08:17:10.415200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:05.707 [2024-11-20 08:17:10.415207] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:05.707 [2024-11-20 08:17:10.415212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:05.707 [2024-11-20 08:17:10.415215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.415221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:05.707 [2024-11-20 08:17:10.415233] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:05.707 [2024-11-20 08:17:10.415238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.415244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:05.707 [2024-11-20 08:17:10.415257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:05.707 [2024-11-20 08:17:10.415261] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:05.707 [2024-11-20 08:17:10.415267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:05.707 [2024-11-20 08:17:10.415276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:05.707 [2024-11-20 08:17:10.415310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:05.707 ===================================================== 00:20:05.707 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:05.707 ===================================================== 00:20:05.707 Controller Capabilities/Features 00:20:05.707 
================================ 00:20:05.707 Vendor ID: 4e58 00:20:05.707 Subsystem Vendor ID: 4e58 00:20:05.707 Serial Number: SPDK1 00:20:05.707 Model Number: SPDK bdev Controller 00:20:05.707 Firmware Version: 25.01 00:20:05.707 Recommended Arb Burst: 6 00:20:05.707 IEEE OUI Identifier: 8d 6b 50 00:20:05.707 Multi-path I/O 00:20:05.707 May have multiple subsystem ports: Yes 00:20:05.707 May have multiple controllers: Yes 00:20:05.707 Associated with SR-IOV VF: No 00:20:05.707 Max Data Transfer Size: 131072 00:20:05.707 Max Number of Namespaces: 32 00:20:05.707 Max Number of I/O Queues: 127 00:20:05.707 NVMe Specification Version (VS): 1.3 00:20:05.707 NVMe Specification Version (Identify): 1.3 00:20:05.707 Maximum Queue Entries: 256 00:20:05.707 Contiguous Queues Required: Yes 00:20:05.707 Arbitration Mechanisms Supported 00:20:05.707 Weighted Round Robin: Not Supported 00:20:05.707 Vendor Specific: Not Supported 00:20:05.708 Reset Timeout: 15000 ms 00:20:05.708 Doorbell Stride: 4 bytes 00:20:05.708 NVM Subsystem Reset: Not Supported 00:20:05.708 Command Sets Supported 00:20:05.708 NVM Command Set: Supported 00:20:05.708 Boot Partition: Not Supported 00:20:05.708 Memory Page Size Minimum: 4096 bytes 00:20:05.708 Memory Page Size Maximum: 4096 bytes 00:20:05.708 Persistent Memory Region: Not Supported 00:20:05.708 Optional Asynchronous Events Supported 00:20:05.708 Namespace Attribute Notices: Supported 00:20:05.708 Firmware Activation Notices: Not Supported 00:20:05.708 ANA Change Notices: Not Supported 00:20:05.708 PLE Aggregate Log Change Notices: Not Supported 00:20:05.708 LBA Status Info Alert Notices: Not Supported 00:20:05.708 EGE Aggregate Log Change Notices: Not Supported 00:20:05.708 Normal NVM Subsystem Shutdown event: Not Supported 00:20:05.708 Zone Descriptor Change Notices: Not Supported 00:20:05.708 Discovery Log Change Notices: Not Supported 00:20:05.708 Controller Attributes 00:20:05.708 128-bit Host Identifier: Supported 00:20:05.708 
Non-Operational Permissive Mode: Not Supported 00:20:05.708 NVM Sets: Not Supported 00:20:05.708 Read Recovery Levels: Not Supported 00:20:05.708 Endurance Groups: Not Supported 00:20:05.708 Predictable Latency Mode: Not Supported 00:20:05.708 Traffic Based Keep ALive: Not Supported 00:20:05.708 Namespace Granularity: Not Supported 00:20:05.708 SQ Associations: Not Supported 00:20:05.708 UUID List: Not Supported 00:20:05.708 Multi-Domain Subsystem: Not Supported 00:20:05.708 Fixed Capacity Management: Not Supported 00:20:05.708 Variable Capacity Management: Not Supported 00:20:05.708 Delete Endurance Group: Not Supported 00:20:05.708 Delete NVM Set: Not Supported 00:20:05.708 Extended LBA Formats Supported: Not Supported 00:20:05.708 Flexible Data Placement Supported: Not Supported 00:20:05.708 00:20:05.708 Controller Memory Buffer Support 00:20:05.708 ================================ 00:20:05.708 Supported: No 00:20:05.708 00:20:05.708 Persistent Memory Region Support 00:20:05.708 ================================ 00:20:05.708 Supported: No 00:20:05.708 00:20:05.708 Admin Command Set Attributes 00:20:05.708 ============================ 00:20:05.708 Security Send/Receive: Not Supported 00:20:05.708 Format NVM: Not Supported 00:20:05.708 Firmware Activate/Download: Not Supported 00:20:05.708 Namespace Management: Not Supported 00:20:05.708 Device Self-Test: Not Supported 00:20:05.708 Directives: Not Supported 00:20:05.708 NVMe-MI: Not Supported 00:20:05.708 Virtualization Management: Not Supported 00:20:05.708 Doorbell Buffer Config: Not Supported 00:20:05.708 Get LBA Status Capability: Not Supported 00:20:05.708 Command & Feature Lockdown Capability: Not Supported 00:20:05.708 Abort Command Limit: 4 00:20:05.708 Async Event Request Limit: 4 00:20:05.708 Number of Firmware Slots: N/A 00:20:05.708 Firmware Slot 1 Read-Only: N/A 00:20:05.708 Firmware Activation Without Reset: N/A 00:20:05.708 Multiple Update Detection Support: N/A 00:20:05.708 Firmware Update 
Granularity: No Information Provided 00:20:05.708 Per-Namespace SMART Log: No 00:20:05.708 Asymmetric Namespace Access Log Page: Not Supported 00:20:05.708 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:05.708 Command Effects Log Page: Supported 00:20:05.708 Get Log Page Extended Data: Supported 00:20:05.708 Telemetry Log Pages: Not Supported 00:20:05.708 Persistent Event Log Pages: Not Supported 00:20:05.708 Supported Log Pages Log Page: May Support 00:20:05.708 Commands Supported & Effects Log Page: Not Supported 00:20:05.708 Feature Identifiers & Effects Log Page:May Support 00:20:05.708 NVMe-MI Commands & Effects Log Page: May Support 00:20:05.708 Data Area 4 for Telemetry Log: Not Supported 00:20:05.708 Error Log Page Entries Supported: 128 00:20:05.708 Keep Alive: Supported 00:20:05.708 Keep Alive Granularity: 10000 ms 00:20:05.708 00:20:05.708 NVM Command Set Attributes 00:20:05.708 ========================== 00:20:05.708 Submission Queue Entry Size 00:20:05.708 Max: 64 00:20:05.708 Min: 64 00:20:05.708 Completion Queue Entry Size 00:20:05.708 Max: 16 00:20:05.708 Min: 16 00:20:05.708 Number of Namespaces: 32 00:20:05.708 Compare Command: Supported 00:20:05.708 Write Uncorrectable Command: Not Supported 00:20:05.708 Dataset Management Command: Supported 00:20:05.708 Write Zeroes Command: Supported 00:20:05.708 Set Features Save Field: Not Supported 00:20:05.708 Reservations: Not Supported 00:20:05.708 Timestamp: Not Supported 00:20:05.708 Copy: Supported 00:20:05.708 Volatile Write Cache: Present 00:20:05.708 Atomic Write Unit (Normal): 1 00:20:05.708 Atomic Write Unit (PFail): 1 00:20:05.708 Atomic Compare & Write Unit: 1 00:20:05.708 Fused Compare & Write: Supported 00:20:05.708 Scatter-Gather List 00:20:05.708 SGL Command Set: Supported (Dword aligned) 00:20:05.708 SGL Keyed: Not Supported 00:20:05.708 SGL Bit Bucket Descriptor: Not Supported 00:20:05.708 SGL Metadata Pointer: Not Supported 00:20:05.708 Oversized SGL: Not Supported 00:20:05.708 SGL 
Metadata Address: Not Supported 00:20:05.708 SGL Offset: Not Supported 00:20:05.708 Transport SGL Data Block: Not Supported 00:20:05.708 Replay Protected Memory Block: Not Supported 00:20:05.708 00:20:05.708 Firmware Slot Information 00:20:05.708 ========================= 00:20:05.708 Active slot: 1 00:20:05.708 Slot 1 Firmware Revision: 25.01 00:20:05.708 00:20:05.708 00:20:05.708 Commands Supported and Effects 00:20:05.708 ============================== 00:20:05.708 Admin Commands 00:20:05.708 -------------- 00:20:05.708 Get Log Page (02h): Supported 00:20:05.708 Identify (06h): Supported 00:20:05.708 Abort (08h): Supported 00:20:05.708 Set Features (09h): Supported 00:20:05.708 Get Features (0Ah): Supported 00:20:05.708 Asynchronous Event Request (0Ch): Supported 00:20:05.708 Keep Alive (18h): Supported 00:20:05.708 I/O Commands 00:20:05.708 ------------ 00:20:05.708 Flush (00h): Supported LBA-Change 00:20:05.708 Write (01h): Supported LBA-Change 00:20:05.708 Read (02h): Supported 00:20:05.708 Compare (05h): Supported 00:20:05.708 Write Zeroes (08h): Supported LBA-Change 00:20:05.708 Dataset Management (09h): Supported LBA-Change 00:20:05.708 Copy (19h): Supported LBA-Change 00:20:05.708 00:20:05.708 Error Log 00:20:05.708 ========= 00:20:05.708 00:20:05.708 Arbitration 00:20:05.708 =========== 00:20:05.708 Arbitration Burst: 1 00:20:05.708 00:20:05.708 Power Management 00:20:05.708 ================ 00:20:05.708 Number of Power States: 1 00:20:05.708 Current Power State: Power State #0 00:20:05.708 Power State #0: 00:20:05.708 Max Power: 0.00 W 00:20:05.708 Non-Operational State: Operational 00:20:05.708 Entry Latency: Not Reported 00:20:05.708 Exit Latency: Not Reported 00:20:05.708 Relative Read Throughput: 0 00:20:05.708 Relative Read Latency: 0 00:20:05.708 Relative Write Throughput: 0 00:20:05.708 Relative Write Latency: 0 00:20:05.708 Idle Power: Not Reported 00:20:05.708 Active Power: Not Reported 00:20:05.708 Non-Operational Permissive Mode: Not 
Supported 00:20:05.708 00:20:05.708 Health Information 00:20:05.708 ================== 00:20:05.708 Critical Warnings: 00:20:05.708 Available Spare Space: OK 00:20:05.708 Temperature: OK 00:20:05.708 Device Reliability: OK 00:20:05.708 Read Only: No 00:20:05.708 Volatile Memory Backup: OK 00:20:05.708 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:05.708 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:05.708 Available Spare: 0% 00:20:05.708 Available Sp[2024-11-20 08:17:10.415414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:05.708 [2024-11-20 08:17:10.415423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:05.708 [2024-11-20 08:17:10.415452] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:05.708 [2024-11-20 08:17:10.415462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.708 [2024-11-20 08:17:10.415470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.708 [2024-11-20 08:17:10.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.708 [2024-11-20 08:17:10.415482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.708 [2024-11-20 08:17:10.417870] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:05.708 [2024-11-20 08:17:10.417882] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:05.709 
[2024-11-20 08:17:10.418461] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:05.709 [2024-11-20 08:17:10.418503] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:20:05.709 [2024-11-20 08:17:10.418510] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:05.709 [2024-11-20 08:17:10.419473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:05.709 [2024-11-20 08:17:10.419485] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:05.709 [2024-11-20 08:17:10.419548] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:05.709 [2024-11-20 08:17:10.422871] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:05.970 are Threshold: 0% 00:20:05.970 Life Percentage Used: 0% 00:20:05.970 Data Units Read: 0 00:20:05.970 Data Units Written: 0 00:20:05.970 Host Read Commands: 0 00:20:05.970 Host Write Commands: 0 00:20:05.970 Controller Busy Time: 0 minutes 00:20:05.970 Power Cycles: 0 00:20:05.970 Power On Hours: 0 hours 00:20:05.970 Unsafe Shutdowns: 0 00:20:05.970 Unrecoverable Media Errors: 0 00:20:05.970 Lifetime Error Log Entries: 0 00:20:05.970 Warning Temperature Time: 0 minutes 00:20:05.970 Critical Temperature Time: 0 minutes 00:20:05.970 00:20:05.970 Number of Queues 00:20:05.970 ================ 00:20:05.970 Number of I/O Submission Queues: 127 00:20:05.970 Number of I/O Completion Queues: 127 00:20:05.970 00:20:05.970 Active Namespaces 00:20:05.970 ================= 00:20:05.970 Namespace ID:1 00:20:05.970 Error Recovery Timeout: Unlimited 
00:20:05.970 Command Set Identifier: NVM (00h) 00:20:05.970 Deallocate: Supported 00:20:05.970 Deallocated/Unwritten Error: Not Supported 00:20:05.970 Deallocated Read Value: Unknown 00:20:05.970 Deallocate in Write Zeroes: Not Supported 00:20:05.970 Deallocated Guard Field: 0xFFFF 00:20:05.970 Flush: Supported 00:20:05.970 Reservation: Supported 00:20:05.970 Namespace Sharing Capabilities: Multiple Controllers 00:20:05.970 Size (in LBAs): 131072 (0GiB) 00:20:05.970 Capacity (in LBAs): 131072 (0GiB) 00:20:05.970 Utilization (in LBAs): 131072 (0GiB) 00:20:05.970 NGUID: 152473F23D424E95BF97AEFE954C605E 00:20:05.970 UUID: 152473f2-3d42-4e95-bf97-aefe954c605e 00:20:05.970 Thin Provisioning: Not Supported 00:20:05.970 Per-NS Atomic Units: Yes 00:20:05.970 Atomic Boundary Size (Normal): 0 00:20:05.970 Atomic Boundary Size (PFail): 0 00:20:05.970 Atomic Boundary Offset: 0 00:20:05.970 Maximum Single Source Range Length: 65535 00:20:05.970 Maximum Copy Length: 65535 00:20:05.970 Maximum Source Range Count: 1 00:20:05.970 NGUID/EUI64 Never Reused: No 00:20:05.970 Namespace Write Protected: No 00:20:05.970 Number of LBA Formats: 1 00:20:05.970 Current LBA Format: LBA Format #00 00:20:05.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:05.970 00:20:05.970 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:05.970 [2024-11-20 08:17:10.629566] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:11.254 Initializing NVMe Controllers 00:20:11.254 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:11.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:20:11.254 Initialization complete. Launching workers. 00:20:11.254 ======================================================== 00:20:11.254 Latency(us) 00:20:11.254 Device Information : IOPS MiB/s Average min max 00:20:11.254 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39960.81 156.10 3203.01 846.16 9787.44 00:20:11.254 ======================================================== 00:20:11.254 Total : 39960.81 156.10 3203.01 846.16 9787.44 00:20:11.254 00:20:11.254 [2024-11-20 08:17:15.647232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:11.254 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:11.254 [2024-11-20 08:17:15.840133] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:16.540 Initializing NVMe Controllers 00:20:16.540 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:16.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:16.540 Initialization complete. Launching workers. 
00:20:16.540 ======================================================== 00:20:16.540 Latency(us) 00:20:16.540 Device Information : IOPS MiB/s Average min max 00:20:16.540 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.86 62.72 7977.71 6981.56 8980.74 00:20:16.540 ======================================================== 00:20:16.540 Total : 16055.86 62.72 7977.71 6981.56 8980.74 00:20:16.540 00:20:16.540 [2024-11-20 08:17:20.882096] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:16.540 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:16.540 [2024-11-20 08:17:21.092022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:21.826 [2024-11-20 08:17:26.169075] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:21.826 Initializing NVMe Controllers 00:20:21.826 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:21.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:21.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:20:21.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:20:21.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:20:21.826 Initialization complete. Launching workers. 
00:20:21.826 Starting thread on core 2 00:20:21.826 Starting thread on core 3 00:20:21.826 Starting thread on core 1 00:20:21.826 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:20:21.826 [2024-11-20 08:17:26.459210] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:25.124 [2024-11-20 08:17:29.515298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:25.124 Initializing NVMe Controllers 00:20:25.124 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:25.124 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:25.124 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:20:25.124 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:20:25.124 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:20:25.124 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:20:25.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:25.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:25.124 Initialization complete. Launching workers. 
00:20:25.124 Starting thread on core 1 with urgent priority queue 00:20:25.124 Starting thread on core 2 with urgent priority queue 00:20:25.124 Starting thread on core 3 with urgent priority queue 00:20:25.124 Starting thread on core 0 with urgent priority queue 00:20:25.124 SPDK bdev Controller (SPDK1 ) core 0: 10573.00 IO/s 9.46 secs/100000 ios 00:20:25.124 SPDK bdev Controller (SPDK1 ) core 1: 14697.67 IO/s 6.80 secs/100000 ios 00:20:25.124 SPDK bdev Controller (SPDK1 ) core 2: 9288.33 IO/s 10.77 secs/100000 ios 00:20:25.124 SPDK bdev Controller (SPDK1 ) core 3: 11801.00 IO/s 8.47 secs/100000 ios 00:20:25.124 ======================================================== 00:20:25.124 00:20:25.124 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:25.124 [2024-11-20 08:17:29.812460] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:25.124 Initializing NVMe Controllers 00:20:25.124 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:25.124 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:25.124 Namespace ID: 1 size: 0GB 00:20:25.124 Initialization complete. 00:20:25.124 INFO: using host memory buffer for IO 00:20:25.124 Hello world! 
00:20:25.124 [2024-11-20 08:17:29.846631] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:25.384 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:25.644 [2024-11-20 08:17:30.140473] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:26.584 Initializing NVMe Controllers 00:20:26.584 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.584 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.584 Initialization complete. Launching workers. 00:20:26.584 submit (in ns) avg, min, max = 8089.7, 3897.5, 3999701.7 00:20:26.584 complete (in ns) avg, min, max = 19304.1, 2390.0, 5993065.0 00:20:26.584 00:20:26.584 Submit histogram 00:20:26.584 ================ 00:20:26.584 Range in us Cumulative Count 00:20:26.585 3.893 - 3.920: 0.8887% ( 167) 00:20:26.585 3.920 - 3.947: 4.9119% ( 756) 00:20:26.585 3.947 - 3.973: 13.8417% ( 1678) 00:20:26.585 3.973 - 4.000: 25.7198% ( 2232) 00:20:26.585 4.000 - 4.027: 38.1779% ( 2341) 00:20:26.585 4.027 - 4.053: 51.6950% ( 2540) 00:20:26.585 4.053 - 4.080: 68.0592% ( 3075) 00:20:26.585 4.080 - 4.107: 82.8588% ( 2781) 00:20:26.585 4.107 - 4.133: 92.0387% ( 1725) 00:20:26.585 4.133 - 4.160: 96.4877% ( 836) 00:20:26.585 4.160 - 4.187: 98.3875% ( 357) 00:20:26.585 4.187 - 4.213: 99.0687% ( 128) 00:20:26.585 4.213 - 4.240: 99.3348% ( 50) 00:20:26.585 4.240 - 4.267: 99.4199% ( 16) 00:20:26.585 4.267 - 4.293: 99.4465% ( 5) 00:20:26.585 4.293 - 4.320: 99.4572% ( 2) 00:20:26.585 4.320 - 4.347: 99.4625% ( 1) 00:20:26.585 4.560 - 4.587: 99.4678% ( 1) 00:20:26.585 4.693 - 4.720: 99.4732% ( 1) 00:20:26.585 4.720 - 4.747: 99.4785% ( 1) 00:20:26.585 5.067 - 5.093: 99.4838% ( 1) 
00:20:26.585 5.173 - 5.200: 99.4891% ( 1) 00:20:26.585 5.280 - 5.307: 99.4944% ( 1) 00:20:26.585 5.413 - 5.440: 99.4998% ( 1) 00:20:26.585 5.440 - 5.467: 99.5051% ( 1) 00:20:26.585 5.653 - 5.680: 99.5104% ( 1) 00:20:26.585 5.867 - 5.893: 99.5157% ( 1) 00:20:26.585 5.947 - 5.973: 99.5264% ( 2) 00:20:26.585 6.080 - 6.107: 99.5370% ( 2) 00:20:26.585 6.160 - 6.187: 99.5477% ( 2) 00:20:26.585 6.187 - 6.213: 99.5530% ( 1) 00:20:26.585 6.213 - 6.240: 99.5583% ( 1) 00:20:26.585 6.267 - 6.293: 99.5743% ( 3) 00:20:26.585 6.293 - 6.320: 99.6009% ( 5) 00:20:26.585 6.320 - 6.347: 99.6062% ( 1) 00:20:26.585 6.347 - 6.373: 99.6115% ( 1) 00:20:26.585 6.373 - 6.400: 99.6275% ( 3) 00:20:26.585 6.400 - 6.427: 99.6381% ( 2) 00:20:26.585 6.427 - 6.453: 99.6434% ( 1) 00:20:26.585 6.453 - 6.480: 99.6701% ( 5) 00:20:26.585 6.480 - 6.507: 99.6807% ( 2) 00:20:26.585 6.507 - 6.533: 99.6860% ( 1) 00:20:26.585 6.533 - 6.560: 99.6967% ( 2) 00:20:26.585 6.560 - 6.587: 99.7073% ( 2) 00:20:26.585 6.587 - 6.613: 99.7126% ( 1) 00:20:26.585 6.613 - 6.640: 99.7180% ( 1) 00:20:26.585 6.640 - 6.667: 99.7233% ( 1) 00:20:26.585 6.667 - 6.693: 99.7286% ( 1) 00:20:26.585 6.693 - 6.720: 99.7339% ( 1) 00:20:26.585 6.720 - 6.747: 99.7499% ( 3) 00:20:26.585 6.747 - 6.773: 99.7552% ( 1) 00:20:26.585 6.773 - 6.800: 99.7658% ( 2) 00:20:26.585 6.800 - 6.827: 99.7712% ( 1) 00:20:26.585 6.827 - 6.880: 99.7765% ( 1) 00:20:26.585 6.880 - 6.933: 99.7978% ( 4) 00:20:26.585 6.987 - 7.040: 99.8084% ( 2) 00:20:26.585 7.093 - 7.147: 99.8297% ( 4) 00:20:26.585 7.147 - 7.200: 99.8403% ( 2) 00:20:26.585 7.200 - 7.253: 99.8457% ( 1) 00:20:26.585 7.253 - 7.307: 99.8563% ( 2) 00:20:26.585 7.307 - 7.360: 99.8616% ( 1) 00:20:26.585 7.360 - 7.413: 99.8670% ( 1) 00:20:26.585 7.467 - 7.520: 99.8723% ( 1) 00:20:26.585 7.787 - 7.840: 99.8776% ( 1) 00:20:26.585 8.053 - 8.107: 99.8829% ( 1) 00:20:26.585 8.587 - 8.640: 99.8882% ( 1) 00:20:26.585 13.387 - 13.440: 99.8936% ( 1) 00:20:26.585 45.227 - 45.440: 99.8989% ( 1) 00:20:26.585 3822.933 
- 3850.240: 99.9042% ( 1) 00:20:26.585 3986.773 - 4014.080: 100.0000% ( 18) 00:20:26.585 00:20:26.585 Complete histogram 00:20:26.585 ================== 00:20:26.585 Range in us Cumulative Count 00:20:26.585 2.387 - 2.400: 0.2874% ( 54) 00:20:26.585 2.400 - 2.413: 1.5486% ( 237) 00:20:26.585 2.413 - 2.427: 1.7242% ( 33) 00:20:26.585 2.427 - 2.440: 2.0116% ( 54) 00:20:26.585 2.440 - 2.453: 2.0808% ( 13) 00:20:26.585 2.453 - 2.467: 4.1509% ( 389) 00:20:26.585 2.467 - 2.480: 43.9679% ( 7482) 00:20:26.585 2.480 - [2024-11-20 08:17:31.160097] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:26.585 2.493: 60.7152% ( 3147) 00:20:26.585 2.493 - 2.507: 71.2149% ( 1973) 00:20:26.585 2.507 - 2.520: 77.7926% ( 1236) 00:20:26.585 2.520 - 2.533: 80.7674% ( 559) 00:20:26.585 2.533 - 2.547: 84.8012% ( 758) 00:20:26.585 2.547 - 2.560: 91.3310% ( 1227) 00:20:26.585 2.560 - 2.573: 95.4553% ( 775) 00:20:26.585 2.573 - 2.587: 97.4669% ( 378) 00:20:26.585 2.587 - 2.600: 98.6589% ( 224) 00:20:26.585 2.600 - 2.613: 99.1698% ( 96) 00:20:26.585 2.613 - 2.627: 99.3508% ( 34) 00:20:26.585 2.627 - 2.640: 99.3774% ( 5) 00:20:26.585 4.480 - 4.507: 99.3827% ( 1) 00:20:26.585 4.560 - 4.587: 99.3880% ( 1) 00:20:26.585 4.587 - 4.613: 99.3933% ( 1) 00:20:26.585 4.667 - 4.693: 99.3986% ( 1) 00:20:26.585 4.693 - 4.720: 99.4040% ( 1) 00:20:26.585 4.773 - 4.800: 99.4253% ( 4) 00:20:26.585 4.800 - 4.827: 99.4306% ( 1) 00:20:26.585 4.827 - 4.853: 99.4412% ( 2) 00:20:26.585 4.907 - 4.933: 99.4519% ( 2) 00:20:26.585 4.987 - 5.013: 99.4572% ( 1) 00:20:26.585 5.040 - 5.067: 99.4785% ( 4) 00:20:26.585 5.067 - 5.093: 99.4838% ( 1) 00:20:26.585 5.093 - 5.120: 99.4944% ( 2) 00:20:26.585 5.173 - 5.200: 99.4998% ( 1) 00:20:26.585 5.360 - 5.387: 99.5104% ( 2) 00:20:26.585 5.413 - 5.440: 99.5157% ( 1) 00:20:26.585 5.440 - 5.467: 99.5264% ( 2) 00:20:26.585 5.600 - 5.627: 99.5317% ( 1) 00:20:26.585 5.627 - 5.653: 99.5370% ( 1) 00:20:26.585 5.813 - 5.840: 
99.5423% ( 1) 00:20:26.585 5.947 - 5.973: 99.5477% ( 1) 00:20:26.585 6.107 - 6.133: 99.5530% ( 1) 00:20:26.585 6.187 - 6.213: 99.5583% ( 1) 00:20:26.585 6.267 - 6.293: 99.5636% ( 1) 00:20:26.585 10.987 - 11.040: 99.5689% ( 1) 00:20:26.585 11.093 - 11.147: 99.5743% ( 1) 00:20:26.585 12.427 - 12.480: 99.5796% ( 1) 00:20:26.585 2007.040 - 2020.693: 99.5849% ( 1) 00:20:26.585 2143.573 - 2157.227: 99.5902% ( 1) 00:20:26.585 3986.773 - 4014.080: 99.9894% ( 75) 00:20:26.585 5980.160 - 6007.467: 100.0000% ( 2) 00:20:26.585 00:20:26.585 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:26.585 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:26.585 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:20:26.585 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:26.585 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:26.846 [ 00:20:26.846 { 00:20:26.846 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:26.846 "subtype": "Discovery", 00:20:26.846 "listen_addresses": [], 00:20:26.846 "allow_any_host": true, 00:20:26.846 "hosts": [] 00:20:26.846 }, 00:20:26.846 { 00:20:26.846 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:26.846 "subtype": "NVMe", 00:20:26.846 "listen_addresses": [ 00:20:26.846 { 00:20:26.846 "trtype": "VFIOUSER", 00:20:26.846 "adrfam": "IPv4", 00:20:26.846 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:26.846 "trsvcid": "0" 00:20:26.846 } 00:20:26.846 ], 00:20:26.846 "allow_any_host": true, 00:20:26.846 "hosts": [], 00:20:26.846 "serial_number": "SPDK1", 00:20:26.846 
"model_number": "SPDK bdev Controller", 00:20:26.846 "max_namespaces": 32, 00:20:26.846 "min_cntlid": 1, 00:20:26.846 "max_cntlid": 65519, 00:20:26.846 "namespaces": [ 00:20:26.846 { 00:20:26.846 "nsid": 1, 00:20:26.846 "bdev_name": "Malloc1", 00:20:26.846 "name": "Malloc1", 00:20:26.846 "nguid": "152473F23D424E95BF97AEFE954C605E", 00:20:26.846 "uuid": "152473f2-3d42-4e95-bf97-aefe954c605e" 00:20:26.846 } 00:20:26.846 ] 00:20:26.846 }, 00:20:26.846 { 00:20:26.846 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:26.846 "subtype": "NVMe", 00:20:26.846 "listen_addresses": [ 00:20:26.846 { 00:20:26.846 "trtype": "VFIOUSER", 00:20:26.846 "adrfam": "IPv4", 00:20:26.846 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:26.846 "trsvcid": "0" 00:20:26.846 } 00:20:26.846 ], 00:20:26.846 "allow_any_host": true, 00:20:26.846 "hosts": [], 00:20:26.846 "serial_number": "SPDK2", 00:20:26.846 "model_number": "SPDK bdev Controller", 00:20:26.846 "max_namespaces": 32, 00:20:26.846 "min_cntlid": 1, 00:20:26.846 "max_cntlid": 65519, 00:20:26.846 "namespaces": [ 00:20:26.846 { 00:20:26.846 "nsid": 1, 00:20:26.846 "bdev_name": "Malloc2", 00:20:26.846 "name": "Malloc2", 00:20:26.846 "nguid": "3E39C87A600945D7807796CE9FEB5A22", 00:20:26.846 "uuid": "3e39c87a-6009-45d7-8077-96ce9feb5a22" 00:20:26.846 } 00:20:26.846 ] 00:20:26.846 } 00:20:26.846 ] 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1958318 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 
-n 2 -g -t /tmp/aer_touch_file 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:26.846 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:26.846 Malloc3 00:20:27.106 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:27.106 [2024-11-20 08:17:31.595165] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:27.106 [2024-11-20 08:17:31.740145] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:27.106 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:27.106 Asynchronous Event Request test 00:20:27.106 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:27.106 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:27.106 Registering asynchronous event callbacks... 00:20:27.106 Starting namespace attribute notice tests for all controllers... 
00:20:27.106 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:27.106 aer_cb - Changed Namespace 00:20:27.106 Cleaning up... 00:20:27.368 [ 00:20:27.368 { 00:20:27.368 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:27.368 "subtype": "Discovery", 00:20:27.368 "listen_addresses": [], 00:20:27.368 "allow_any_host": true, 00:20:27.368 "hosts": [] 00:20:27.368 }, 00:20:27.368 { 00:20:27.368 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:27.368 "subtype": "NVMe", 00:20:27.368 "listen_addresses": [ 00:20:27.368 { 00:20:27.368 "trtype": "VFIOUSER", 00:20:27.368 "adrfam": "IPv4", 00:20:27.368 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:27.368 "trsvcid": "0" 00:20:27.368 } 00:20:27.368 ], 00:20:27.368 "allow_any_host": true, 00:20:27.368 "hosts": [], 00:20:27.368 "serial_number": "SPDK1", 00:20:27.368 "model_number": "SPDK bdev Controller", 00:20:27.368 "max_namespaces": 32, 00:20:27.368 "min_cntlid": 1, 00:20:27.368 "max_cntlid": 65519, 00:20:27.368 "namespaces": [ 00:20:27.368 { 00:20:27.368 "nsid": 1, 00:20:27.368 "bdev_name": "Malloc1", 00:20:27.368 "name": "Malloc1", 00:20:27.368 "nguid": "152473F23D424E95BF97AEFE954C605E", 00:20:27.368 "uuid": "152473f2-3d42-4e95-bf97-aefe954c605e" 00:20:27.368 }, 00:20:27.368 { 00:20:27.368 "nsid": 2, 00:20:27.368 "bdev_name": "Malloc3", 00:20:27.368 "name": "Malloc3", 00:20:27.368 "nguid": "3EF254DD9B3A40DC9E828B3F8A8A6005", 00:20:27.368 "uuid": "3ef254dd-9b3a-40dc-9e82-8b3f8a8a6005" 00:20:27.368 } 00:20:27.368 ] 00:20:27.368 }, 00:20:27.368 { 00:20:27.368 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:27.368 "subtype": "NVMe", 00:20:27.368 "listen_addresses": [ 00:20:27.368 { 00:20:27.368 "trtype": "VFIOUSER", 00:20:27.368 "adrfam": "IPv4", 00:20:27.368 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:27.368 "trsvcid": "0" 00:20:27.368 } 00:20:27.368 ], 00:20:27.368 "allow_any_host": true, 00:20:27.368 "hosts": [], 00:20:27.368 "serial_number": 
"SPDK2", 00:20:27.368 "model_number": "SPDK bdev Controller", 00:20:27.368 "max_namespaces": 32, 00:20:27.368 "min_cntlid": 1, 00:20:27.368 "max_cntlid": 65519, 00:20:27.368 "namespaces": [ 00:20:27.368 { 00:20:27.368 "nsid": 1, 00:20:27.368 "bdev_name": "Malloc2", 00:20:27.368 "name": "Malloc2", 00:20:27.368 "nguid": "3E39C87A600945D7807796CE9FEB5A22", 00:20:27.368 "uuid": "3e39c87a-6009-45d7-8077-96ce9feb5a22" 00:20:27.368 } 00:20:27.368 ] 00:20:27.368 } 00:20:27.368 ] 00:20:27.368 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1958318 00:20:27.368 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:27.368 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:27.368 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:27.368 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:27.368 [2024-11-20 08:17:31.981061] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:20:27.368 [2024-11-20 08:17:31.981113] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958520 ] 00:20:27.368 [2024-11-20 08:17:32.036924] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:27.368 [2024-11-20 08:17:32.039151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:27.368 [2024-11-20 08:17:32.039176] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa44ddce000 00:20:27.368 [2024-11-20 08:17:32.040160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.041164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.042174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.043182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.044188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.045194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.046200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:27.368 
[2024-11-20 08:17:32.047205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:27.368 [2024-11-20 08:17:32.048210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:27.368 [2024-11-20 08:17:32.048220] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa44ddc3000 00:20:27.368 [2024-11-20 08:17:32.049544] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:27.368 [2024-11-20 08:17:32.069020] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:27.368 [2024-11-20 08:17:32.069047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:20:27.368 [2024-11-20 08:17:32.071100] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:27.368 [2024-11-20 08:17:32.071144] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:27.368 [2024-11-20 08:17:32.071224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:20:27.368 [2024-11-20 08:17:32.071237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:20:27.368 [2024-11-20 08:17:32.071242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:20:27.368 [2024-11-20 08:17:32.072103] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:27.368 [2024-11-20 08:17:32.072113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:20:27.369 [2024-11-20 08:17:32.072120] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:20:27.369 [2024-11-20 08:17:32.073109] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:27.369 [2024-11-20 08:17:32.073118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:20:27.369 [2024-11-20 08:17:32.073126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.074111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:27.369 [2024-11-20 08:17:32.074120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.075116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:27.369 [2024-11-20 08:17:32.075125] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:27.369 [2024-11-20 08:17:32.075130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.075137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.075245] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:20:27.369 [2024-11-20 08:17:32.075250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.075255] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:20:27.369 [2024-11-20 08:17:32.076121] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:20:27.369 [2024-11-20 08:17:32.077124] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:27.369 [2024-11-20 08:17:32.078131] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:27.369 [2024-11-20 08:17:32.079133] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:27.369 [2024-11-20 08:17:32.079173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:27.369 [2024-11-20 08:17:32.080143] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:27.369 [2024-11-20 08:17:32.080152] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:27.369 [2024-11-20 08:17:32.080158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:27.369 [2024-11-20 08:17:32.080179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:20:27.369 [2024-11-20 08:17:32.080190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:27.369 [2024-11-20 08:17:32.080203] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:27.369 [2024-11-20 08:17:32.080208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:27.369 [2024-11-20 08:17:32.080212] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.369 [2024-11-20 08:17:32.080224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:27.369 [2024-11-20 08:17:32.090870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:27.369 [2024-11-20 08:17:32.090882] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:20:27.369 [2024-11-20 08:17:32.090887] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:20:27.369 [2024-11-20 08:17:32.090891] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:20:27.369 [2024-11-20 08:17:32.090896] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:27.369 [2024-11-20 08:17:32.090904] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:20:27.369 [2024-11-20 08:17:32.090908] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:20:27.369 [2024-11-20 08:17:32.090913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:20:27.369 [2024-11-20 08:17:32.090922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:27.369 [2024-11-20 08:17:32.090932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.098868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.098884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.632 [2024-11-20 08:17:32.098894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.632 [2024-11-20 08:17:32.098903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.632 [2024-11-20 08:17:32.098911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.632 [2024-11-20 08:17:32.098916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.098923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.098932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.106867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.106877] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:20:27.632 [2024-11-20 08:17:32.106883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.106890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.106895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.106904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.114866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.114931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.114939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:27.632 
[2024-11-20 08:17:32.114947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:27.632 [2024-11-20 08:17:32.114952] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:27.632 [2024-11-20 08:17:32.114955] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.632 [2024-11-20 08:17:32.114961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.122868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.122879] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:20:27.632 [2024-11-20 08:17:32.122890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.122898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.122905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:27.632 [2024-11-20 08:17:32.122911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:27.632 [2024-11-20 08:17:32.122915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.632 [2024-11-20 08:17:32.122921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.130868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.130882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.130890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.130898] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:27.632 [2024-11-20 08:17:32.130903] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:27.632 [2024-11-20 08:17:32.130906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.632 [2024-11-20 08:17:32.130912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.138868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.138878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138914] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:27.632 [2024-11-20 08:17:32.138919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:20:27.632 [2024-11-20 08:17:32.138924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:20:27.632 [2024-11-20 08:17:32.138940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.146868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.146882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.154866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.154879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.162868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 
08:17:32.162883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.170869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:27.632 [2024-11-20 08:17:32.170884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:27.632 [2024-11-20 08:17:32.170889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:27.632 [2024-11-20 08:17:32.170893] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:27.632 [2024-11-20 08:17:32.170896] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:27.632 [2024-11-20 08:17:32.170900] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:27.632 [2024-11-20 08:17:32.170906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:27.632 [2024-11-20 08:17:32.170914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:27.632 [2024-11-20 08:17:32.170918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:27.632 [2024-11-20 08:17:32.170922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.632 [2024-11-20 08:17:32.170928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:27.632 [2024-11-20 08:17:32.170935] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:27.632 [2024-11-20 08:17:32.170939] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:27.633 [2024-11-20 08:17:32.170943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.633 [2024-11-20 08:17:32.170949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:27.633 [2024-11-20 08:17:32.170957] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:27.633 [2024-11-20 08:17:32.170961] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:27.633 [2024-11-20 08:17:32.170964] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:27.633 [2024-11-20 08:17:32.170970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:27.633 [2024-11-20 08:17:32.178867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:27.633 [2024-11-20 08:17:32.178882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:27.633 [2024-11-20 08:17:32.178893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:27.633 [2024-11-20 08:17:32.178900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:27.633 ===================================================== 00:20:27.633 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:27.633 ===================================================== 00:20:27.633 Controller Capabilities/Features 00:20:27.633 
================================ 00:20:27.633 Vendor ID: 4e58 00:20:27.633 Subsystem Vendor ID: 4e58 00:20:27.633 Serial Number: SPDK2 00:20:27.633 Model Number: SPDK bdev Controller 00:20:27.633 Firmware Version: 25.01 00:20:27.633 Recommended Arb Burst: 6 00:20:27.633 IEEE OUI Identifier: 8d 6b 50 00:20:27.633 Multi-path I/O 00:20:27.633 May have multiple subsystem ports: Yes 00:20:27.633 May have multiple controllers: Yes 00:20:27.633 Associated with SR-IOV VF: No 00:20:27.633 Max Data Transfer Size: 131072 00:20:27.633 Max Number of Namespaces: 32 00:20:27.633 Max Number of I/O Queues: 127 00:20:27.633 NVMe Specification Version (VS): 1.3 00:20:27.633 NVMe Specification Version (Identify): 1.3 00:20:27.633 Maximum Queue Entries: 256 00:20:27.633 Contiguous Queues Required: Yes 00:20:27.633 Arbitration Mechanisms Supported 00:20:27.633 Weighted Round Robin: Not Supported 00:20:27.633 Vendor Specific: Not Supported 00:20:27.633 Reset Timeout: 15000 ms 00:20:27.633 Doorbell Stride: 4 bytes 00:20:27.633 NVM Subsystem Reset: Not Supported 00:20:27.633 Command Sets Supported 00:20:27.633 NVM Command Set: Supported 00:20:27.633 Boot Partition: Not Supported 00:20:27.633 Memory Page Size Minimum: 4096 bytes 00:20:27.633 Memory Page Size Maximum: 4096 bytes 00:20:27.633 Persistent Memory Region: Not Supported 00:20:27.633 Optional Asynchronous Events Supported 00:20:27.633 Namespace Attribute Notices: Supported 00:20:27.633 Firmware Activation Notices: Not Supported 00:20:27.633 ANA Change Notices: Not Supported 00:20:27.633 PLE Aggregate Log Change Notices: Not Supported 00:20:27.633 LBA Status Info Alert Notices: Not Supported 00:20:27.633 EGE Aggregate Log Change Notices: Not Supported 00:20:27.633 Normal NVM Subsystem Shutdown event: Not Supported 00:20:27.633 Zone Descriptor Change Notices: Not Supported 00:20:27.633 Discovery Log Change Notices: Not Supported 00:20:27.633 Controller Attributes 00:20:27.633 128-bit Host Identifier: Supported 00:20:27.633 
Non-Operational Permissive Mode: Not Supported 00:20:27.633 NVM Sets: Not Supported 00:20:27.633 Read Recovery Levels: Not Supported 00:20:27.633 Endurance Groups: Not Supported 00:20:27.633 Predictable Latency Mode: Not Supported 00:20:27.633 Traffic Based Keep ALive: Not Supported 00:20:27.633 Namespace Granularity: Not Supported 00:20:27.633 SQ Associations: Not Supported 00:20:27.633 UUID List: Not Supported 00:20:27.633 Multi-Domain Subsystem: Not Supported 00:20:27.633 Fixed Capacity Management: Not Supported 00:20:27.633 Variable Capacity Management: Not Supported 00:20:27.633 Delete Endurance Group: Not Supported 00:20:27.633 Delete NVM Set: Not Supported 00:20:27.633 Extended LBA Formats Supported: Not Supported 00:20:27.633 Flexible Data Placement Supported: Not Supported 00:20:27.633 00:20:27.633 Controller Memory Buffer Support 00:20:27.633 ================================ 00:20:27.633 Supported: No 00:20:27.633 00:20:27.633 Persistent Memory Region Support 00:20:27.633 ================================ 00:20:27.633 Supported: No 00:20:27.633 00:20:27.633 Admin Command Set Attributes 00:20:27.633 ============================ 00:20:27.633 Security Send/Receive: Not Supported 00:20:27.633 Format NVM: Not Supported 00:20:27.633 Firmware Activate/Download: Not Supported 00:20:27.633 Namespace Management: Not Supported 00:20:27.633 Device Self-Test: Not Supported 00:20:27.633 Directives: Not Supported 00:20:27.633 NVMe-MI: Not Supported 00:20:27.633 Virtualization Management: Not Supported 00:20:27.633 Doorbell Buffer Config: Not Supported 00:20:27.633 Get LBA Status Capability: Not Supported 00:20:27.633 Command & Feature Lockdown Capability: Not Supported 00:20:27.633 Abort Command Limit: 4 00:20:27.633 Async Event Request Limit: 4 00:20:27.633 Number of Firmware Slots: N/A 00:20:27.633 Firmware Slot 1 Read-Only: N/A 00:20:27.633 Firmware Activation Without Reset: N/A 00:20:27.633 Multiple Update Detection Support: N/A 00:20:27.633 Firmware Update 
Granularity: No Information Provided 00:20:27.633 Per-Namespace SMART Log: No 00:20:27.633 Asymmetric Namespace Access Log Page: Not Supported 00:20:27.633 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:27.633 Command Effects Log Page: Supported 00:20:27.633 Get Log Page Extended Data: Supported 00:20:27.633 Telemetry Log Pages: Not Supported 00:20:27.633 Persistent Event Log Pages: Not Supported 00:20:27.633 Supported Log Pages Log Page: May Support 00:20:27.633 Commands Supported & Effects Log Page: Not Supported 00:20:27.633 Feature Identifiers & Effects Log Page:May Support 00:20:27.633 NVMe-MI Commands & Effects Log Page: May Support 00:20:27.633 Data Area 4 for Telemetry Log: Not Supported 00:20:27.633 Error Log Page Entries Supported: 128 00:20:27.633 Keep Alive: Supported 00:20:27.633 Keep Alive Granularity: 10000 ms 00:20:27.633 00:20:27.633 NVM Command Set Attributes 00:20:27.633 ========================== 00:20:27.633 Submission Queue Entry Size 00:20:27.633 Max: 64 00:20:27.633 Min: 64 00:20:27.633 Completion Queue Entry Size 00:20:27.633 Max: 16 00:20:27.633 Min: 16 00:20:27.633 Number of Namespaces: 32 00:20:27.633 Compare Command: Supported 00:20:27.633 Write Uncorrectable Command: Not Supported 00:20:27.633 Dataset Management Command: Supported 00:20:27.633 Write Zeroes Command: Supported 00:20:27.633 Set Features Save Field: Not Supported 00:20:27.633 Reservations: Not Supported 00:20:27.633 Timestamp: Not Supported 00:20:27.633 Copy: Supported 00:20:27.633 Volatile Write Cache: Present 00:20:27.633 Atomic Write Unit (Normal): 1 00:20:27.633 Atomic Write Unit (PFail): 1 00:20:27.634 Atomic Compare & Write Unit: 1 00:20:27.634 Fused Compare & Write: Supported 00:20:27.634 Scatter-Gather List 00:20:27.634 SGL Command Set: Supported (Dword aligned) 00:20:27.634 SGL Keyed: Not Supported 00:20:27.634 SGL Bit Bucket Descriptor: Not Supported 00:20:27.634 SGL Metadata Pointer: Not Supported 00:20:27.634 Oversized SGL: Not Supported 00:20:27.634 SGL 
Metadata Address: Not Supported 00:20:27.634 SGL Offset: Not Supported 00:20:27.634 Transport SGL Data Block: Not Supported 00:20:27.634 Replay Protected Memory Block: Not Supported 00:20:27.634 00:20:27.634 Firmware Slot Information 00:20:27.634 ========================= 00:20:27.634 Active slot: 1 00:20:27.634 Slot 1 Firmware Revision: 25.01 00:20:27.634 00:20:27.634 00:20:27.634 Commands Supported and Effects 00:20:27.634 ============================== 00:20:27.634 Admin Commands 00:20:27.634 -------------- 00:20:27.634 Get Log Page (02h): Supported 00:20:27.634 Identify (06h): Supported 00:20:27.634 Abort (08h): Supported 00:20:27.634 Set Features (09h): Supported 00:20:27.634 Get Features (0Ah): Supported 00:20:27.634 Asynchronous Event Request (0Ch): Supported 00:20:27.634 Keep Alive (18h): Supported 00:20:27.634 I/O Commands 00:20:27.634 ------------ 00:20:27.634 Flush (00h): Supported LBA-Change 00:20:27.634 Write (01h): Supported LBA-Change 00:20:27.634 Read (02h): Supported 00:20:27.634 Compare (05h): Supported 00:20:27.634 Write Zeroes (08h): Supported LBA-Change 00:20:27.634 Dataset Management (09h): Supported LBA-Change 00:20:27.634 Copy (19h): Supported LBA-Change 00:20:27.634 00:20:27.634 Error Log 00:20:27.634 ========= 00:20:27.634 00:20:27.634 Arbitration 00:20:27.634 =========== 00:20:27.634 Arbitration Burst: 1 00:20:27.634 00:20:27.634 Power Management 00:20:27.634 ================ 00:20:27.634 Number of Power States: 1 00:20:27.634 Current Power State: Power State #0 00:20:27.634 Power State #0: 00:20:27.634 Max Power: 0.00 W 00:20:27.634 Non-Operational State: Operational 00:20:27.634 Entry Latency: Not Reported 00:20:27.634 Exit Latency: Not Reported 00:20:27.634 Relative Read Throughput: 0 00:20:27.634 Relative Read Latency: 0 00:20:27.634 Relative Write Throughput: 0 00:20:27.634 Relative Write Latency: 0 00:20:27.634 Idle Power: Not Reported 00:20:27.634 Active Power: Not Reported 00:20:27.634 Non-Operational Permissive Mode: Not 
Supported 00:20:27.634 00:20:27.634 Health Information 00:20:27.634 ================== 00:20:27.634 Critical Warnings: 00:20:27.634 Available Spare Space: OK 00:20:27.634 Temperature: OK 00:20:27.634 Device Reliability: OK 00:20:27.634 Read Only: No 00:20:27.634 Volatile Memory Backup: OK 00:20:27.634 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:27.634 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:27.634 Available Spare: 0% 00:20:27.634 Available Spare Threshold: 0% 00:20:27.634 Life Percentage Used: 0% 00:20:27.634 Data Units Read: 0 00:20:27.634 Data Units Written: 0 00:20:27.634 Host Read Commands: 0 00:20:27.634 Host Write Commands: 0 00:20:27.634 Controller Busy Time: 0 minutes 00:20:27.634 Power Cycles: 0 00:20:27.634 Power On Hours: 0 hours 00:20:27.634 Unsafe Shutdowns: 0 00:20:27.634 Unrecoverable Media Errors: 0 00:20:27.634 Lifetime Error Log Entries: 0 00:20:27.634 Warning Temperature Time: 0 minutes 00:20:27.634 Critical Temperature Time: 0 minutes 00:20:27.634 00:20:27.634 Number of Queues 00:20:27.634 ================ 00:20:27.634 Number of I/O Submission Queues: 127 00:20:27.634 Number of I/O Completion Queues: 127 00:20:27.634 00:20:27.634 Active Namespaces 00:20:27.634 ================= 00:20:27.634 Namespace ID:1 00:20:27.634 Error Recovery Timeout: Unlimited 
[2024-11-20 08:17:32.179000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:27.634 [2024-11-20 08:17:32.186869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:27.634 [2024-11-20 08:17:32.186900] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:20:27.634 [2024-11-20 08:17:32.186910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.634 [2024-11-20 08:17:32.186919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.634 [2024-11-20 08:17:32.186925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.634 [2024-11-20 08:17:32.186932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.634 [2024-11-20 08:17:32.186987] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:27.634 [2024-11-20 08:17:32.186998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:27.634 [2024-11-20 08:17:32.187992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:27.634 [2024-11-20 08:17:32.188041] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:20:27.634 [2024-11-20 08:17:32.188048] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:20:27.634 [2024-11-20 08:17:32.188996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:27.634 [2024-11-20 08:17:32.189008] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:20:27.634 [2024-11-20 08:17:32.189057] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:27.634 [2024-11-20 08:17:32.190433] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:27.634 
00:20:27.634 Command Set Identifier: NVM (00h) 00:20:27.634 Deallocate: Supported 00:20:27.634 Deallocated/Unwritten Error: Not Supported 00:20:27.634 Deallocated Read Value: Unknown 00:20:27.634 Deallocate in Write Zeroes: Not Supported 00:20:27.634 Deallocated Guard Field: 0xFFFF 00:20:27.634 Flush: Supported 00:20:27.634 Reservation: Supported 00:20:27.634 Namespace Sharing Capabilities: Multiple Controllers 00:20:27.634 Size (in LBAs): 131072 (0GiB) 00:20:27.634 Capacity (in LBAs): 131072 (0GiB) 00:20:27.634 Utilization (in LBAs): 131072 (0GiB) 00:20:27.634 NGUID: 3E39C87A600945D7807796CE9FEB5A22 00:20:27.634 UUID: 3e39c87a-6009-45d7-8077-96ce9feb5a22 00:20:27.634 Thin Provisioning: Not Supported 00:20:27.634 Per-NS Atomic Units: Yes 00:20:27.634 Atomic Boundary Size (Normal): 0 00:20:27.634 Atomic Boundary Size (PFail): 0 00:20:27.634 Atomic Boundary Offset: 0 00:20:27.634 Maximum Single Source Range Length: 65535 00:20:27.634 Maximum Copy Length: 65535 00:20:27.634 Maximum Source Range Count: 1 00:20:27.634 NGUID/EUI64 Never Reused: No 00:20:27.634 Namespace Write Protected: No 00:20:27.634 Number of LBA Formats: 1 00:20:27.634 Current LBA Format: LBA Format #00 00:20:27.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:27.634 00:20:27.634 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:27.896 [2024-11-20 08:17:32.383944] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:33.183 Initializing NVMe Controllers 00:20:33.183 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:33.183 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:20:33.183 Initialization complete. Launching workers. 00:20:33.183 ======================================================== 00:20:33.183 Latency(us) 00:20:33.183 Device Information : IOPS MiB/s Average min max 00:20:33.183 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40150.90 156.84 3187.65 841.86 6930.42 00:20:33.183 ======================================================== 00:20:33.183 Total : 40150.90 156.84 3187.65 841.86 6930.42 00:20:33.183 00:20:33.183 [2024-11-20 08:17:37.490064] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:33.183 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:33.183 [2024-11-20 08:17:37.682658] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:38.467 Initializing NVMe Controllers 00:20:38.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:38.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:38.467 Initialization complete. Launching workers. 
00:20:38.467 ======================================================== 00:20:38.467 Latency(us) 00:20:38.467 Device Information : IOPS MiB/s Average min max 00:20:38.467 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35005.56 136.74 3655.81 1106.31 9854.99 00:20:38.467 ======================================================== 00:20:38.467 Total : 35005.56 136.74 3655.81 1106.31 9854.99 00:20:38.467 00:20:38.467 [2024-11-20 08:17:42.701724] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:38.467 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:38.467 [2024-11-20 08:17:42.906917] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:43.751 [2024-11-20 08:17:48.053948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:43.751 Initializing NVMe Controllers 00:20:43.751 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:43.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:43.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:43.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:43.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:43.751 Initialization complete. Launching workers. 
00:20:43.751 Starting thread on core 2 00:20:43.751 Starting thread on core 3 00:20:43.751 Starting thread on core 1 00:20:43.751 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:43.751 [2024-11-20 08:17:48.343491] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:47.051 [2024-11-20 08:17:51.393374] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:47.051 Initializing NVMe Controllers 00:20:47.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:47.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:47.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:47.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:47.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:47.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:47.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:47.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:47.051 Initialization complete. Launching workers. 
00:20:47.051 Starting thread on core 1 with urgent priority queue 00:20:47.051 Starting thread on core 2 with urgent priority queue 00:20:47.051 Starting thread on core 3 with urgent priority queue 00:20:47.051 Starting thread on core 0 with urgent priority queue 00:20:47.051 SPDK bdev Controller (SPDK2 ) core 0: 14091.67 IO/s 7.10 secs/100000 ios 00:20:47.051 SPDK bdev Controller (SPDK2 ) core 1: 10955.00 IO/s 9.13 secs/100000 ios 00:20:47.051 SPDK bdev Controller (SPDK2 ) core 2: 16453.67 IO/s 6.08 secs/100000 ios 00:20:47.051 SPDK bdev Controller (SPDK2 ) core 3: 12294.00 IO/s 8.13 secs/100000 ios 00:20:47.051 ======================================================== 00:20:47.051 00:20:47.051 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:47.051 [2024-11-20 08:17:51.692323] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:47.051 Initializing NVMe Controllers 00:20:47.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:47.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:47.051 Namespace ID: 1 size: 0GB 00:20:47.051 Initialization complete. 00:20:47.051 INFO: using host memory buffer for IO 00:20:47.051 Hello world! 
00:20:47.051 [2024-11-20 08:17:51.702379] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:47.051 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:47.312 [2024-11-20 08:17:51.996016] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:48.697 Initializing NVMe Controllers 00:20:48.697 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:48.697 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:48.697 Initialization complete. Launching workers. 00:20:48.697 submit (in ns) avg, min, max = 9119.1, 3898.3, 4000180.0 00:20:48.697 complete (in ns) avg, min, max = 18275.5, 2380.8, 3999545.8 00:20:48.697 00:20:48.697 Submit histogram 00:20:48.697 ================ 00:20:48.697 Range in us Cumulative Count 00:20:48.697 3.893 - 3.920: 0.5854% ( 111) 00:20:48.697 3.920 - 3.947: 4.0188% ( 651) 00:20:48.697 3.947 - 3.973: 11.2652% ( 1374) 00:20:48.697 3.973 - 4.000: 22.4197% ( 2115) 00:20:48.697 4.000 - 4.027: 35.0509% ( 2395) 00:20:48.697 4.027 - 4.053: 47.1019% ( 2285) 00:20:48.697 4.053 - 4.080: 63.8468% ( 3175) 00:20:48.697 4.080 - 4.107: 79.9008% ( 3044) 00:20:48.697 4.107 - 4.133: 90.9709% ( 2099) 00:20:48.697 4.133 - 4.160: 96.8092% ( 1107) 00:20:48.697 4.160 - 4.187: 98.7817% ( 374) 00:20:48.697 4.187 - 4.213: 99.3408% ( 106) 00:20:48.697 4.213 - 4.240: 99.4146% ( 14) 00:20:48.697 4.240 - 4.267: 99.4304% ( 3) 00:20:48.697 4.267 - 4.293: 99.4357% ( 1) 00:20:48.697 4.320 - 4.347: 99.4462% ( 2) 00:20:48.697 4.480 - 4.507: 99.4515% ( 1) 00:20:48.697 4.613 - 4.640: 99.4568% ( 1) 00:20:48.697 4.667 - 4.693: 99.4621% ( 1) 00:20:48.697 4.693 - 4.720: 99.4673% ( 1) 00:20:48.697 4.773 - 4.800: 99.4726% ( 1) 
00:20:48.697 4.880 - 4.907: 99.4779% ( 1) 00:20:48.697 4.960 - 4.987: 99.4831% ( 1) 00:20:48.697 4.987 - 5.013: 99.4884% ( 1) 00:20:48.697 5.013 - 5.040: 99.4937% ( 1) 00:20:48.697 5.040 - 5.067: 99.4990% ( 1) 00:20:48.697 5.440 - 5.467: 99.5042% ( 1) 00:20:48.697 5.680 - 5.707: 99.5095% ( 1) 00:20:48.697 5.733 - 5.760: 99.5148% ( 1) 00:20:48.697 5.787 - 5.813: 99.5201% ( 1) 00:20:48.697 5.973 - 6.000: 99.5359% ( 3) 00:20:48.697 6.000 - 6.027: 99.5464% ( 2) 00:20:48.697 6.027 - 6.053: 99.5517% ( 1) 00:20:48.697 6.053 - 6.080: 99.5623% ( 2) 00:20:48.697 6.080 - 6.107: 99.5675% ( 1) 00:20:48.697 6.107 - 6.133: 99.5834% ( 3) 00:20:48.697 6.133 - 6.160: 99.5886% ( 1) 00:20:48.697 6.160 - 6.187: 99.5992% ( 2) 00:20:48.697 6.267 - 6.293: 99.6097% ( 2) 00:20:48.697 6.320 - 6.347: 99.6150% ( 1) 00:20:48.698 6.347 - 6.373: 99.6255% ( 2) 00:20:48.698 6.400 - 6.427: 99.6308% ( 1) 00:20:48.698 6.427 - 6.453: 99.6361% ( 1) 00:20:48.698 6.480 - 6.507: 99.6519% ( 3) 00:20:48.698 6.533 - 6.560: 99.6625% ( 2) 00:20:48.698 6.587 - 6.613: 99.6677% ( 1) 00:20:48.698 6.693 - 6.720: 99.6730% ( 1) 00:20:48.698 6.800 - 6.827: 99.6836% ( 2) 00:20:48.698 6.827 - 6.880: 99.6941% ( 2) 00:20:48.698 6.880 - 6.933: 99.7047% ( 2) 00:20:48.698 6.933 - 6.987: 99.7099% ( 1) 00:20:48.698 6.987 - 7.040: 99.7205% ( 2) 00:20:48.698 7.040 - 7.093: 99.7310% ( 2) 00:20:48.698 7.093 - 7.147: 99.7363% ( 1) 00:20:48.698 7.147 - 7.200: 99.7574% ( 4) 00:20:48.698 7.200 - 7.253: 99.7785% ( 4) 00:20:48.698 7.253 - 7.307: 99.7838% ( 1) 00:20:48.698 7.307 - 7.360: 99.7943% ( 2) 00:20:48.698 7.360 - 7.413: 99.8049% ( 2) 00:20:48.698 7.413 - 7.467: 99.8101% ( 1) 00:20:48.698 7.467 - 7.520: 99.8260% ( 3) 00:20:48.698 7.520 - 7.573: 99.8418% ( 3) 00:20:48.698 7.573 - 7.627: 99.8471% ( 1) 00:20:48.698 7.627 - 7.680: 99.8523% ( 1) 00:20:48.698 8.160 - 8.213: 99.8576% ( 1) 00:20:48.698 8.587 - 8.640: 99.8629% ( 1) 00:20:48.698 8.800 - 8.853: 99.8682% ( 1) 00:20:48.698 9.493 - 9.547: 99.8734% ( 1) 00:20:48.698 3986.773 - 
4014.080: 100.0000% ( 24) 00:20:48.698 00:20:48.698 Complete histogram 00:20:48.698 ================== 00:20:48.698 Range in us Cumulative Count 00:20:48.698 2.373 - 2.387: 0.0053% ( 1) 00:20:48.698 2.387 - 2.400: 0.3323% ( 62) 00:20:48.698 2.400 - 2.413: 1.0601% ( 138) 00:20:48.698 2.413 - 2.427: 1.1392% ( 15) 00:20:48.698 2.427 - 2.440: 1.3765% ( 45) 00:20:48.698 2.440 - 2.453: 45.7149% ( 8407) 00:20:48.698 2.453 - 2.467: 56.8957% ( 2120) 00:20:48.698 2.467 - 2.480: 69.5586% ( 2401) 00:20:48.698 2.480 - 2.493: 77.2006% ( 1449) 00:20:48.698 [2024-11-20 08:17:53.091504] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:48.698 2.493 - 2.507: 81.0242% ( 725) 00:20:48.698 2.507 - 2.520: 84.7054% ( 698) 00:20:48.698 2.520 - 2.533: 90.1746% ( 1037) 00:20:48.698 2.533 - 2.547: 94.4043% ( 802) 00:20:48.698 2.547 - 2.560: 96.5877% ( 414) 00:20:48.698 2.560 - 2.573: 98.1963% ( 305) 00:20:48.698 2.573 - 2.587: 99.0981% ( 171) 00:20:48.698 2.587 - 2.600: 99.3197% ( 42) 00:20:48.698 2.600 - 2.613: 99.3618% ( 8) 00:20:48.698 2.613 - 2.627: 99.3671% ( 1) 00:20:48.698 2.813 - 2.827: 99.3724% ( 1) 00:20:48.698 4.293 - 4.320: 99.3882% ( 3) 00:20:48.698 4.347 - 4.373: 99.3935% ( 1) 00:20:48.698 4.400 - 4.427: 99.4040% ( 2) 00:20:48.698 4.427 - 4.453: 99.4093% ( 1) 00:20:48.698 4.480 - 4.507: 99.4304% ( 4) 00:20:48.698 4.693 - 4.720: 99.4357% ( 1) 00:20:48.698 4.747 - 4.773: 99.4515% ( 3) 00:20:48.698 4.773 - 4.800: 99.4568% ( 1) 00:20:48.698 4.800 - 4.827: 99.4673% ( 2) 00:20:48.698 4.933 - 4.960: 99.4726% ( 1) 00:20:48.698 5.013 - 5.040: 99.4779% ( 1) 00:20:48.698 5.040 - 5.067: 99.4831% ( 1) 00:20:48.698 5.093 - 5.120: 99.4884% ( 1) 00:20:48.698 5.120 - 5.147: 99.4990% ( 2) 00:20:48.698 5.147 - 5.173: 99.5042% ( 1) 00:20:48.698 5.200 - 5.227: 99.5095% ( 1) 00:20:48.698 5.307 - 5.333: 99.5148% ( 1) 00:20:48.698 5.413 - 5.440: 99.5253% ( 2) 00:20:48.698 5.440 - 5.467: 99.5359% ( 2) 00:20:48.698 5.467 - 5.493: 99.5412% ( 1) 
00:20:48.698 5.493 - 5.520: 99.5517% ( 2) 00:20:48.698 5.547 - 5.573: 99.5570% ( 1) 00:20:48.698 5.893 - 5.920: 99.5623% ( 1) 00:20:48.698 5.947 - 5.973: 99.5675% ( 1) 00:20:48.698 6.053 - 6.080: 99.5728% ( 1) 00:20:48.698 6.080 - 6.107: 99.5781% ( 1) 00:20:48.698 6.107 - 6.133: 99.5834% ( 1) 00:20:48.698 6.187 - 6.213: 99.5886% ( 1) 00:20:48.698 9.387 - 9.440: 99.5939% ( 1) 00:20:48.698 10.453 - 10.507: 99.5992% ( 1) 00:20:48.698 11.893 - 11.947: 99.6045% ( 1) 00:20:48.698 3904.853 - 3932.160: 99.6097% ( 1) 00:20:48.698 3986.773 - 4014.080: 100.0000% ( 74) 00:20:48.698 00:20:48.698 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:48.698 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:48.698 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:48.698 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:48.698 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:48.698 [ 00:20:48.698 { 00:20:48.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:48.698 "subtype": "Discovery", 00:20:48.698 "listen_addresses": [], 00:20:48.698 "allow_any_host": true, 00:20:48.698 "hosts": [] 00:20:48.698 }, 00:20:48.698 { 00:20:48.698 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:48.698 "subtype": "NVMe", 00:20:48.698 "listen_addresses": [ 00:20:48.698 { 00:20:48.698 "trtype": "VFIOUSER", 00:20:48.698 "adrfam": "IPv4", 00:20:48.698 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:48.698 "trsvcid": "0" 00:20:48.698 } 00:20:48.698 ], 00:20:48.698 "allow_any_host": true, 00:20:48.698 "hosts": 
[], 00:20:48.698 "serial_number": "SPDK1", 00:20:48.698 "model_number": "SPDK bdev Controller", 00:20:48.698 "max_namespaces": 32, 00:20:48.698 "min_cntlid": 1, 00:20:48.698 "max_cntlid": 65519, 00:20:48.698 "namespaces": [ 00:20:48.698 { 00:20:48.698 "nsid": 1, 00:20:48.698 "bdev_name": "Malloc1", 00:20:48.698 "name": "Malloc1", 00:20:48.698 "nguid": "152473F23D424E95BF97AEFE954C605E", 00:20:48.698 "uuid": "152473f2-3d42-4e95-bf97-aefe954c605e" 00:20:48.698 }, 00:20:48.698 { 00:20:48.698 "nsid": 2, 00:20:48.699 "bdev_name": "Malloc3", 00:20:48.699 "name": "Malloc3", 00:20:48.699 "nguid": "3EF254DD9B3A40DC9E828B3F8A8A6005", 00:20:48.699 "uuid": "3ef254dd-9b3a-40dc-9e82-8b3f8a8a6005" 00:20:48.699 } 00:20:48.699 ] 00:20:48.699 }, 00:20:48.699 { 00:20:48.699 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:48.699 "subtype": "NVMe", 00:20:48.699 "listen_addresses": [ 00:20:48.699 { 00:20:48.699 "trtype": "VFIOUSER", 00:20:48.699 "adrfam": "IPv4", 00:20:48.699 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:48.699 "trsvcid": "0" 00:20:48.699 } 00:20:48.699 ], 00:20:48.699 "allow_any_host": true, 00:20:48.699 "hosts": [], 00:20:48.699 "serial_number": "SPDK2", 00:20:48.699 "model_number": "SPDK bdev Controller", 00:20:48.699 "max_namespaces": 32, 00:20:48.699 "min_cntlid": 1, 00:20:48.699 "max_cntlid": 65519, 00:20:48.699 "namespaces": [ 00:20:48.699 { 00:20:48.699 "nsid": 1, 00:20:48.699 "bdev_name": "Malloc2", 00:20:48.699 "name": "Malloc2", 00:20:48.699 "nguid": "3E39C87A600945D7807796CE9FEB5A22", 00:20:48.699 "uuid": "3e39c87a-6009-45d7-8077-96ce9feb5a22" 00:20:48.699 } 00:20:48.699 ] 00:20:48.699 } 00:20:48.699 ] 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1962554 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:48.699 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:48.959 Malloc4 00:20:48.959 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:48.959 [2024-11-20 08:17:53.523871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:48.959 [2024-11-20 08:17:53.680001] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:49.220 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:49.220 Asynchronous Event Request test 00:20:49.220 Attaching to 
/var/run/vfio-user/domain/vfio-user2/2 00:20:49.220 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:49.220 Registering asynchronous event callbacks... 00:20:49.220 Starting namespace attribute notice tests for all controllers... 00:20:49.220 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:49.220 aer_cb - Changed Namespace 00:20:49.220 Cleaning up... 00:20:49.220 [ 00:20:49.220 { 00:20:49.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:49.220 "subtype": "Discovery", 00:20:49.220 "listen_addresses": [], 00:20:49.220 "allow_any_host": true, 00:20:49.220 "hosts": [] 00:20:49.220 }, 00:20:49.220 { 00:20:49.220 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:49.220 "subtype": "NVMe", 00:20:49.221 "listen_addresses": [ 00:20:49.221 { 00:20:49.221 "trtype": "VFIOUSER", 00:20:49.221 "adrfam": "IPv4", 00:20:49.221 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:49.221 "trsvcid": "0" 00:20:49.221 } 00:20:49.221 ], 00:20:49.221 "allow_any_host": true, 00:20:49.221 "hosts": [], 00:20:49.221 "serial_number": "SPDK1", 00:20:49.221 "model_number": "SPDK bdev Controller", 00:20:49.221 "max_namespaces": 32, 00:20:49.221 "min_cntlid": 1, 00:20:49.221 "max_cntlid": 65519, 00:20:49.221 "namespaces": [ 00:20:49.221 { 00:20:49.221 "nsid": 1, 00:20:49.221 "bdev_name": "Malloc1", 00:20:49.221 "name": "Malloc1", 00:20:49.221 "nguid": "152473F23D424E95BF97AEFE954C605E", 00:20:49.221 "uuid": "152473f2-3d42-4e95-bf97-aefe954c605e" 00:20:49.221 }, 00:20:49.221 { 00:20:49.221 "nsid": 2, 00:20:49.221 "bdev_name": "Malloc3", 00:20:49.221 "name": "Malloc3", 00:20:49.221 "nguid": "3EF254DD9B3A40DC9E828B3F8A8A6005", 00:20:49.221 "uuid": "3ef254dd-9b3a-40dc-9e82-8b3f8a8a6005" 00:20:49.221 } 00:20:49.221 ] 00:20:49.221 }, 00:20:49.221 { 00:20:49.221 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:49.221 "subtype": "NVMe", 00:20:49.221 "listen_addresses": [ 00:20:49.221 { 00:20:49.221 "trtype": "VFIOUSER", 00:20:49.221 
"adrfam": "IPv4", 00:20:49.221 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:49.221 "trsvcid": "0" 00:20:49.221 } 00:20:49.221 ], 00:20:49.221 "allow_any_host": true, 00:20:49.221 "hosts": [], 00:20:49.221 "serial_number": "SPDK2", 00:20:49.221 "model_number": "SPDK bdev Controller", 00:20:49.221 "max_namespaces": 32, 00:20:49.221 "min_cntlid": 1, 00:20:49.221 "max_cntlid": 65519, 00:20:49.221 "namespaces": [ 00:20:49.221 { 00:20:49.221 "nsid": 1, 00:20:49.221 "bdev_name": "Malloc2", 00:20:49.221 "name": "Malloc2", 00:20:49.221 "nguid": "3E39C87A600945D7807796CE9FEB5A22", 00:20:49.221 "uuid": "3e39c87a-6009-45d7-8077-96ce9feb5a22" 00:20:49.221 }, 00:20:49.221 { 00:20:49.221 "nsid": 2, 00:20:49.221 "bdev_name": "Malloc4", 00:20:49.221 "name": "Malloc4", 00:20:49.221 "nguid": "AF263797014B4454B382445781A0EF21", 00:20:49.221 "uuid": "af263797-014b-4454-b382-445781a0ef21" 00:20:49.221 } 00:20:49.221 ] 00:20:49.221 } 00:20:49.221 ] 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1962554 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1953463 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1953463 ']' 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1953463 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.221 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1953463 00:20:49.482 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.482 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.482 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1953463' 00:20:49.482 killing process with pid 1953463 00:20:49.482 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1953463 00:20:49.482 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1953463 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1962780 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1962780' 00:20:49.482 Process pid: 1962780 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@60 -- # waitforlisten 1962780 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1962780 ']' 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.482 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:49.482 [2024-11-20 08:17:54.171799] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:49.482 [2024-11-20 08:17:54.172748] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:20:49.482 [2024-11-20 08:17:54.172787] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.743 [2024-11-20 08:17:54.252699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.743 [2024-11-20 08:17:54.288305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.743 [2024-11-20 08:17:54.288340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.743 [2024-11-20 08:17:54.288347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.743 [2024-11-20 08:17:54.288357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.743 [2024-11-20 08:17:54.288363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.743 [2024-11-20 08:17:54.289966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.743 [2024-11-20 08:17:54.290082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.743 [2024-11-20 08:17:54.290234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.743 [2024-11-20 08:17:54.290235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.743 [2024-11-20 08:17:54.345126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:49.743 [2024-11-20 08:17:54.345279] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:49.743 [2024-11-20 08:17:54.346296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:49.743 [2024-11-20 08:17:54.346892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:49.743 [2024-11-20 08:17:54.346983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:20:50.314 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.314 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:50.314 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:51.257 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:51.518 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:51.518 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:51.518 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:51.518 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:51.518 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:51.779 Malloc1 00:20:51.779 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:52.040 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:52.040 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:20:52.301 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:52.301 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:52.301 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:52.561 Malloc2 00:20:52.561 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:52.823 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:52.823 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1962780 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1962780 ']' 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1962780 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.085 08:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962780 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962780' 00:20:53.085 killing process with pid 1962780 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1962780 00:20:53.085 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1962780 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:53.346 00:20:53.346 real 0m51.349s 00:20:53.346 user 3m16.919s 00:20:53.346 sys 0m2.744s 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:53.346 ************************************ 00:20:53.346 END TEST nvmf_vfio_user 00:20:53.346 ************************************ 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.346 ************************************ 00:20:53.346 START TEST nvmf_vfio_user_nvme_compliance 00:20:53.346 ************************************ 00:20:53.346 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:53.346 * Looking for test storage... 00:20:53.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:53.346 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:53.346 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:20:53.346 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.608 08:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.608 08:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.608 --rc genhtml_branch_coverage=1 00:20:53.608 --rc genhtml_function_coverage=1 00:20:53.608 --rc genhtml_legend=1 00:20:53.608 --rc geninfo_all_blocks=1 00:20:53.608 --rc geninfo_unexecuted_blocks=1 00:20:53.608 00:20:53.608 ' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.608 --rc genhtml_branch_coverage=1 00:20:53.608 --rc genhtml_function_coverage=1 00:20:53.608 --rc genhtml_legend=1 00:20:53.608 --rc geninfo_all_blocks=1 00:20:53.608 --rc geninfo_unexecuted_blocks=1 00:20:53.608 00:20:53.608 ' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.608 --rc genhtml_branch_coverage=1 00:20:53.608 --rc genhtml_function_coverage=1 00:20:53.608 --rc 
genhtml_legend=1 00:20:53.608 --rc geninfo_all_blocks=1 00:20:53.608 --rc geninfo_unexecuted_blocks=1 00:20:53.608 00:20:53.608 ' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.608 --rc genhtml_branch_coverage=1 00:20:53.608 --rc genhtml_function_coverage=1 00:20:53.608 --rc genhtml_legend=1 00:20:53.608 --rc geninfo_all_blocks=1 00:20:53.608 --rc geninfo_unexecuted_blocks=1 00:20:53.608 00:20:53.608 ' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:53.608 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.609 08:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:53.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1963646 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1963646' 00:20:53.609 Process pid: 1963646 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:53.609 08:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1963646 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1963646 ']' 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.609 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:53.609 [2024-11-20 08:17:58.252171] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:20:53.609 [2024-11-20 08:17:58.252250] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.869 [2024-11-20 08:17:58.335893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.869 [2024-11-20 08:17:58.376816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:53.869 [2024-11-20 08:17:58.376853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.869 [2024-11-20 08:17:58.376866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.869 [2024-11-20 08:17:58.376874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.869 [2024-11-20 08:17:58.376880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.869 [2024-11-20 08:17:58.378295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.869 [2024-11-20 08:17:58.378413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.869 [2024-11-20 08:17:58.378415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.441 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.441 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:54.441 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:55.385 08:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:55.385 malloc0 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.385 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.645 08:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.645 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:55.645 00:20:55.645 00:20:55.645 CUnit - A unit testing framework for C - Version 2.1-3 00:20:55.645 http://cunit.sourceforge.net/ 00:20:55.645 00:20:55.645 00:20:55.645 Suite: nvme_compliance 00:20:55.645 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 08:18:00.347307] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:55.646 [2024-11-20 08:18:00.348663] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:55.646 [2024-11-20 08:18:00.348674] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:55.646 [2024-11-20 08:18:00.348678] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:55.646 [2024-11-20 08:18:00.350323] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:55.906 passed 00:20:55.906 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 08:18:00.448954] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:55.906 [2024-11-20 08:18:00.451973] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:55.906 passed 00:20:55.906 Test: admin_identify_ns ...[2024-11-20 08:18:00.552187] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:55.906 [2024-11-20 08:18:00.611874] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:55.906 [2024-11-20 08:18:00.619872] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:56.166 [2024-11-20 08:18:00.640980] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.166 passed 00:20:56.166 Test: admin_get_features_mandatory_features ...[2024-11-20 08:18:00.735406] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.166 [2024-11-20 08:18:00.738428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.166 passed 00:20:56.166 Test: admin_get_features_optional_features ...[2024-11-20 08:18:00.836990] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.166 [2024-11-20 08:18:00.840010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.166 passed 00:20:56.427 Test: admin_set_features_number_of_queues ...[2024-11-20 08:18:00.938174] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.427 [2024-11-20 08:18:01.042975] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.427 passed 00:20:56.427 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 08:18:01.136611] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.427 [2024-11-20 08:18:01.139630] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.687 passed 00:20:56.687 Test: admin_get_log_page_with_lpo ...[2024-11-20 08:18:01.235110] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.687 [2024-11-20 08:18:01.306877] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:56.687 [2024-11-20 08:18:01.319920] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.687 passed 00:20:56.947 Test: fabric_property_get ...[2024-11-20 08:18:01.415532] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.947 [2024-11-20 08:18:01.416787] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:56.947 [2024-11-20 08:18:01.418554] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.947 passed 00:20:56.947 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 08:18:01.514175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:56.947 [2024-11-20 08:18:01.515421] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:56.947 [2024-11-20 08:18:01.519205] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:56.947 passed 00:20:56.947 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 08:18:01.614111] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.207 [2024-11-20 08:18:01.701872] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:57.207 [2024-11-20 08:18:01.717878] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:57.207 [2024-11-20 08:18:01.722948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.207 passed 00:20:57.207 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 08:18:01.814553] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.208 [2024-11-20 
08:18:01.815799] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:57.208 [2024-11-20 08:18:01.817571] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.208 passed 00:20:57.208 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 08:18:01.914655] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.468 [2024-11-20 08:18:01.989877] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:57.468 [2024-11-20 08:18:02.013869] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:57.468 [2024-11-20 08:18:02.018949] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.468 passed 00:20:57.468 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 08:18:02.110939] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.468 [2024-11-20 08:18:02.112189] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:57.468 [2024-11-20 08:18:02.112208] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:57.468 [2024-11-20 08:18:02.113955] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.468 passed 00:20:57.728 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 08:18:02.209048] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.729 [2024-11-20 08:18:02.304880] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:57.729 [2024-11-20 08:18:02.312879] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:57.729 [2024-11-20 08:18:02.320868] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:57.729 [2024-11-20 
08:18:02.328867] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:57.729 [2024-11-20 08:18:02.357945] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.729 passed 00:20:57.729 Test: admin_create_io_sq_verify_pc ...[2024-11-20 08:18:02.449545] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:57.989 [2024-11-20 08:18:02.465877] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:57.989 [2024-11-20 08:18:02.483704] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:57.989 passed 00:20:57.989 Test: admin_create_io_qp_max_qps ...[2024-11-20 08:18:02.577206] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:59.371 [2024-11-20 08:18:03.680873] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:59.371 [2024-11-20 08:18:04.064004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:59.631 passed 00:20:59.631 Test: admin_create_io_sq_shared_cq ...[2024-11-20 08:18:04.156121] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:59.631 [2024-11-20 08:18:04.291872] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:59.631 [2024-11-20 08:18:04.328938] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:59.892 passed 00:20:59.892 00:20:59.892 Run Summary: Type Total Ran Passed Failed Inactive 00:20:59.892 suites 1 1 n/a 0 0 00:20:59.892 tests 18 18 18 0 0 00:20:59.892 asserts 360 360 360 0 n/a 00:20:59.892 00:20:59.892 Elapsed time = 1.672 seconds 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1963646 00:20:59.892 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1963646 ']' 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1963646 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963646 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963646' 00:20:59.892 killing process with pid 1963646 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1963646 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1963646 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:59.892 00:20:59.892 real 0m6.627s 00:20:59.892 user 0m18.798s 00:20:59.892 sys 0m0.543s 00:20:59.892 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.893 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:59.893 ************************************ 00:20:59.893 END TEST nvmf_vfio_user_nvme_compliance 00:20:59.893 ************************************ 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.153 ************************************ 00:21:00.153 START TEST nvmf_vfio_user_fuzz 00:21:00.153 ************************************ 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:00.153 * Looking for test storage... 
00:21:00.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:00.153 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.153 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.154 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.154 --rc genhtml_branch_coverage=1 00:21:00.154 --rc genhtml_function_coverage=1 00:21:00.154 --rc genhtml_legend=1 00:21:00.154 --rc geninfo_all_blocks=1 00:21:00.154 --rc geninfo_unexecuted_blocks=1 00:21:00.154 00:21:00.154 ' 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.154 --rc genhtml_branch_coverage=1 00:21:00.154 --rc genhtml_function_coverage=1 00:21:00.154 --rc genhtml_legend=1 00:21:00.154 --rc geninfo_all_blocks=1 00:21:00.154 --rc geninfo_unexecuted_blocks=1 00:21:00.154 00:21:00.154 ' 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.154 --rc genhtml_branch_coverage=1 00:21:00.154 --rc genhtml_function_coverage=1 00:21:00.154 --rc genhtml_legend=1 00:21:00.154 --rc geninfo_all_blocks=1 00:21:00.154 --rc geninfo_unexecuted_blocks=1 00:21:00.154 00:21:00.154 ' 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.154 --rc genhtml_branch_coverage=1 00:21:00.154 --rc genhtml_function_coverage=1 00:21:00.154 --rc genhtml_legend=1 00:21:00.154 --rc geninfo_all_blocks=1 00:21:00.154 --rc geninfo_unexecuted_blocks=1 00:21:00.154 00:21:00.154 ' 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:00.154 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.420 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:00.420 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:00.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1965160 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1965160' 00:21:00.420 Process pid: 1965160 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1965160 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1965160 ']' 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.420 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.421 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:00.421 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.421 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:01.064 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.064 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:21:01.064 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.458 malloc0 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:21:02.458 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:34.584 Fuzzing completed. 
Shutting down the fuzz application 00:21:34.584 00:21:34.584 Dumping successful admin opcodes: 00:21:34.584 8, 9, 10, 24, 00:21:34.584 Dumping successful io opcodes: 00:21:34.584 0, 00:21:34.584 NS: 0x20000081ef00 I/O qp, Total commands completed: 1210966, total successful commands: 4751, random_seed: 552057792 00:21:34.584 NS: 0x20000081ef00 admin qp, Total commands completed: 152174, total successful commands: 1226, random_seed: 1487350976 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1965160 ']' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.584 
08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965160' 00:21:34.584 killing process with pid 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1965160 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:34.584 00:21:34.584 real 0m33.803s 00:21:34.584 user 0m40.134s 00:21:34.584 sys 0m24.443s 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:34.584 ************************************ 00:21:34.584 END TEST nvmf_vfio_user_fuzz 00:21:34.584 ************************************ 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.584 ************************************ 00:21:34.584 START TEST nvmf_auth_target 00:21:34.584 ************************************ 00:21:34.584 08:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:34.584 * Looking for test storage... 00:21:34.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.584 --rc genhtml_branch_coverage=1 00:21:34.584 --rc genhtml_function_coverage=1 00:21:34.584 --rc genhtml_legend=1 00:21:34.584 --rc geninfo_all_blocks=1 00:21:34.584 --rc geninfo_unexecuted_blocks=1 00:21:34.584 00:21:34.584 ' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.584 --rc genhtml_branch_coverage=1 00:21:34.584 --rc genhtml_function_coverage=1 00:21:34.584 --rc genhtml_legend=1 00:21:34.584 --rc geninfo_all_blocks=1 00:21:34.584 --rc geninfo_unexecuted_blocks=1 00:21:34.584 00:21:34.584 ' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.584 --rc genhtml_branch_coverage=1 00:21:34.584 --rc genhtml_function_coverage=1 00:21:34.584 --rc genhtml_legend=1 00:21:34.584 --rc geninfo_all_blocks=1 00:21:34.584 --rc geninfo_unexecuted_blocks=1 00:21:34.584 00:21:34.584 ' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.584 --rc genhtml_branch_coverage=1 00:21:34.584 --rc genhtml_function_coverage=1 00:21:34.584 --rc genhtml_legend=1 00:21:34.584 --rc geninfo_all_blocks=1 00:21:34.584 --rc geninfo_unexecuted_blocks=1 00:21:34.584 00:21:34.584 ' 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.584 08:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:34.584 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.585 
08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:21:34.585 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:42.732 
08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.732 08:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:42.732 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:42.732 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
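The discovery loop traced above (nvmf/common.sh@226-245) finds the net device behind each PCI address by globbing the sysfs tree and then stripping the paths down to interface names with `${pci_net_devs[@]##*/}`. A minimal sketch of that glob-and-strip step, run against a throwaway directory that mimics the `/sys/bus/pci/devices` layout (the real tree needs the E810 hardware present on the test node):

```shell
#!/usr/bin/env bash
# Mimic the sysfs layout used by the discovery loop; the temp dir stands in
# for /sys/bus/pci/devices, which only has entries on real hardware.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0"

pci=0000:31:00.0
pci_net_devs=("$sysfs/$pci/net/"*)        # glob: full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirs, keep interface names
echo "${pci_net_devs[0]}"                 # cvl_0_0

rm -rf "$sysfs"
```

The `(( 1 == 0 ))` check in the trace is the script verifying the glob matched at least one entry before announcing "Found net devices under ...".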
00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:42.732 Found net devices under 0000:31:00.0: cvl_0_0 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:42.732 Found net devices under 0000:31:00.1: cvl_0_1 00:21:42.732 08:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@247 -- # create_target_ns 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:42.732 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:42.733 10.0.0.1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:42.733 10.0.0.2 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:42.733 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 
]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:42.733 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:42.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
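The 10.0.0.1/10.0.0.2 addresses being exercised here were produced a few lines earlier by the `val_to_ip` helper (nvmf/setup.sh@11-13), which turns the 32-bit pool value 167772161 (0x0a000001) into a dotted quad via `printf '%u.%u.%u.%u\n'`. The trace only shows the final printf with the octets already split out; the bit-shift decomposition below is an illustrative reimplementation, not the script's exact code:

```shell
#!/usr/bin/env bash
# Illustrative sketch of val_to_ip: split a 32-bit integer into four octets.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side, inside the netns)
```

Each initiator/target pair consumes two consecutive addresses from the pool, which is why the loop at setup.sh@33 advances `ip_pool` by 2 per pair.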
00:21:42.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.631 ms 00:21:42.734 00:21:42.734 --- 10.0.0.1 ping statistics --- 00:21:42.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.734 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:42.734 
08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:42.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:21:42.734 00:21:42.734 --- 10.0.0.2 ping statistics --- 00:21:42.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.734 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:42.734 08:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:42.734 08:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.734 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1976299 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1976299 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1976299 ']' 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.735 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:43.679 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1976521 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=dbeadfb7c65edfbfcfd9462c297b2ac969e7e96376cadf1c 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.GLw 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key dbeadfb7c65edfbfcfd9462c297b2ac969e7e96376cadf1c 0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 dbeadfb7c65edfbfcfd9462c297b2ac969e7e96376cadf1c 0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=dbeadfb7c65edfbfcfd9462c297b2ac969e7e96376cadf1c 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.GLw 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-null.GLw 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GLw 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=04b0586b180fb88a146e5f21bd9e2f63cff46dc9aa436ebbc7e9afd07580ad22 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.zif 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 04b0586b180fb88a146e5f21bd9e2f63cff46dc9aa436ebbc7e9afd07580ad22 3 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 04b0586b180fb88a146e5f21bd9e2f63cff46dc9aa436ebbc7e9afd07580ad22 3 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.680 08:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=04b0586b180fb88a146e5f21bd9e2f63cff46dc9aa436ebbc7e9afd07580ad22 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.zif 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.zif 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.zif 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=c2150c078dae2c1270bdb37f75303af0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.gHq 00:21:43.680 08:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key c2150c078dae2c1270bdb37f75303af0 1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 c2150c078dae2c1270bdb37f75303af0 1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=c2150c078dae2c1270bdb37f75303af0 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.gHq 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.gHq 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gHq 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=e9262ea6e535562d2530fd1aae8c6cf1ce21ccda68fbfa51 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.LBZ 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key e9262ea6e535562d2530fd1aae8c6cf1ce21ccda68fbfa51 2 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 e9262ea6e535562d2530fd1aae8c6cf1ce21ccda68fbfa51 2 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=e9262ea6e535562d2530fd1aae8c6cf1ce21ccda68fbfa51 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.LBZ 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.LBZ 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.LBZ 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.680 08:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=f77098891ccc6e50a767b5fba83a59c3fb61ad99ac95a11a 00:21:43.680 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.Fol 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key f77098891ccc6e50a767b5fba83a59c3fb61ad99ac95a11a 2 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 f77098891ccc6e50a767b5fba83a59c3fb61ad99ac95a11a 2 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=f77098891ccc6e50a767b5fba83a59c3fb61ad99ac95a11a 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:43.681 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.Fol 
00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.Fol 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Fol 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=3c5951270fde12e4f4e0017108fa4eef 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.Bhi 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 3c5951270fde12e4f4e0017108fa4eef 1 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 3c5951270fde12e4f4e0017108fa4eef 1 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.941 08:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=3c5951270fde12e4f4e0017108fa4eef 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.Bhi 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.Bhi 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Bhi 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=e24e2e532354213192d2383c0ade24aa936f903762bb37ffbe7256bb5f5b9beb 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.ZUH 00:21:43.941 08:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key e24e2e532354213192d2383c0ade24aa936f903762bb37ffbe7256bb5f5b9beb 3 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 e24e2e532354213192d2383c0ade24aa936f903762bb37ffbe7256bb5f5b9beb 3 00:21:43.941 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=e24e2e532354213192d2383c0ade24aa936f903762bb37ffbe7256bb5f5b9beb 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.ZUH 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.ZUH 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ZUH 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1976299 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1976299 ']' 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.942 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1976521 /var/tmp/host.sock 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1976521 ']' 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:44.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GLw 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.202 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.464 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.464 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GLw 00:21:44.464 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GLw 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.zif ]] 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zif 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zif 00:21:44.464 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zif 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gHq 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gHq 00:21:44.724 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gHq 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.LBZ ]] 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LBZ 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LBZ 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LBZ 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fol 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Fol 00:21:44.984 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Fol 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Bhi ]] 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bhi 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bhi 00:21:45.246 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bhi 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZUH 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZUH 00:21:45.507 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZUH 00:21:45.507 08:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:45.507 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:45.507 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.507 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.507 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:45.507 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.769 08:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.769 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.770 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.031 00:21:46.031 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.032 { 00:21:46.032 "cntlid": 1, 00:21:46.032 "qid": 0, 00:21:46.032 "state": "enabled", 00:21:46.032 "thread": "nvmf_tgt_poll_group_000", 00:21:46.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:46.032 "listen_address": { 00:21:46.032 "trtype": "TCP", 00:21:46.032 "adrfam": "IPv4", 00:21:46.032 "traddr": "10.0.0.2", 00:21:46.032 "trsvcid": "4420" 00:21:46.032 }, 00:21:46.032 "peer_address": { 00:21:46.032 "trtype": "TCP", 00:21:46.032 "adrfam": "IPv4", 00:21:46.032 "traddr": "10.0.0.1", 00:21:46.032 "trsvcid": "46272" 00:21:46.032 }, 00:21:46.032 "auth": { 00:21:46.032 "state": "completed", 00:21:46.032 "digest": "sha256", 00:21:46.032 "dhgroup": "null" 00:21:46.032 } 00:21:46.032 } 00:21:46.032 ]' 00:21:46.032 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.293 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.554 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:21:46.555 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:21:47.129 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.391 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.652 00:21:47.652 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.652 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.652 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.914 { 00:21:47.914 "cntlid": 3, 00:21:47.914 "qid": 0, 00:21:47.914 "state": "enabled", 00:21:47.914 "thread": "nvmf_tgt_poll_group_000", 00:21:47.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:47.914 "listen_address": { 00:21:47.914 "trtype": "TCP", 00:21:47.914 "adrfam": "IPv4", 00:21:47.914 
"traddr": "10.0.0.2", 00:21:47.914 "trsvcid": "4420" 00:21:47.914 }, 00:21:47.914 "peer_address": { 00:21:47.914 "trtype": "TCP", 00:21:47.914 "adrfam": "IPv4", 00:21:47.914 "traddr": "10.0.0.1", 00:21:47.914 "trsvcid": "50114" 00:21:47.914 }, 00:21:47.914 "auth": { 00:21:47.914 "state": "completed", 00:21:47.914 "digest": "sha256", 00:21:47.914 "dhgroup": "null" 00:21:47.914 } 00:21:47.914 } 00:21:47.914 ]' 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.914 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.176 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:21:48.176 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.119 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.381 00:21:49.381 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.381 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.381 
08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.381 { 00:21:49.381 "cntlid": 5, 00:21:49.381 "qid": 0, 00:21:49.381 "state": "enabled", 00:21:49.381 "thread": "nvmf_tgt_poll_group_000", 00:21:49.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.381 "listen_address": { 00:21:49.381 "trtype": "TCP", 00:21:49.381 "adrfam": "IPv4", 00:21:49.381 "traddr": "10.0.0.2", 00:21:49.381 "trsvcid": "4420" 00:21:49.381 }, 00:21:49.381 "peer_address": { 00:21:49.381 "trtype": "TCP", 00:21:49.381 "adrfam": "IPv4", 00:21:49.381 "traddr": "10.0.0.1", 00:21:49.381 "trsvcid": "50150" 00:21:49.381 }, 00:21:49.381 "auth": { 00:21:49.381 "state": "completed", 00:21:49.381 "digest": "sha256", 00:21:49.381 "dhgroup": "null" 00:21:49.381 } 00:21:49.381 } 00:21:49.381 ]' 00:21:49.381 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.642 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.903 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:21:49.903 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.475 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.736 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.997 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.997 
08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.997 { 00:21:50.997 "cntlid": 7, 00:21:50.997 "qid": 0, 00:21:50.997 "state": "enabled", 00:21:50.997 "thread": "nvmf_tgt_poll_group_000", 00:21:50.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.997 "listen_address": { 00:21:50.997 "trtype": "TCP", 00:21:50.997 "adrfam": "IPv4", 00:21:50.997 "traddr": "10.0.0.2", 00:21:50.997 "trsvcid": "4420" 00:21:50.997 }, 00:21:50.997 "peer_address": { 00:21:50.997 "trtype": "TCP", 00:21:50.997 "adrfam": "IPv4", 00:21:50.997 "traddr": "10.0.0.1", 00:21:50.997 "trsvcid": "50192" 00:21:50.997 }, 00:21:50.997 "auth": { 00:21:50.997 "state": "completed", 00:21:50.997 "digest": "sha256", 00:21:50.997 "dhgroup": "null" 00:21:50.997 } 00:21:50.997 } 00:21:50.997 ]' 00:21:50.997 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.257 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.518 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:21:51.518 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:52.090 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.350 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.351 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.351 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.612 00:21:52.612 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.612 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.612 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.874 { 00:21:52.874 "cntlid": 9, 00:21:52.874 "qid": 0, 00:21:52.874 "state": "enabled", 00:21:52.874 "thread": "nvmf_tgt_poll_group_000", 00:21:52.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.874 "listen_address": { 00:21:52.874 "trtype": "TCP", 00:21:52.874 "adrfam": "IPv4", 00:21:52.874 "traddr": "10.0.0.2", 00:21:52.874 "trsvcid": "4420" 00:21:52.874 }, 00:21:52.874 "peer_address": { 00:21:52.874 "trtype": "TCP", 00:21:52.874 "adrfam": "IPv4", 00:21:52.874 "traddr": "10.0.0.1", 00:21:52.874 "trsvcid": "50210" 00:21:52.874 
}, 00:21:52.874 "auth": { 00:21:52.874 "state": "completed", 00:21:52.874 "digest": "sha256", 00:21:52.874 "dhgroup": "ffdhe2048" 00:21:52.874 } 00:21:52.874 } 00:21:52.874 ]' 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.874 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.135 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:21:53.135 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.078 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.079 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.339 00:21:54.339 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.339 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.339 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.339 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.339 { 00:21:54.339 "cntlid": 11, 00:21:54.339 "qid": 0, 00:21:54.339 "state": "enabled", 00:21:54.339 "thread": "nvmf_tgt_poll_group_000", 00:21:54.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.339 "listen_address": { 00:21:54.339 "trtype": "TCP", 00:21:54.339 "adrfam": "IPv4", 00:21:54.339 "traddr": "10.0.0.2", 00:21:54.339 "trsvcid": "4420" 00:21:54.339 }, 00:21:54.339 "peer_address": { 00:21:54.339 "trtype": "TCP", 00:21:54.339 "adrfam": "IPv4", 00:21:54.339 "traddr": "10.0.0.1", 00:21:54.339 "trsvcid": "50228" 00:21:54.339 }, 00:21:54.339 "auth": { 00:21:54.339 "state": "completed", 00:21:54.339 "digest": "sha256", 00:21:54.339 "dhgroup": "ffdhe2048" 00:21:54.339 } 00:21:54.340 } 00:21:54.340 ]' 00:21:54.340 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.600 08:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.600 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.861 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:21:54.861 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:55.433 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.695 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.955 00:21:55.955 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.955 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.955 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.215 08:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.215 { 00:21:56.215 "cntlid": 13, 00:21:56.215 "qid": 0, 00:21:56.215 "state": "enabled", 00:21:56.215 "thread": "nvmf_tgt_poll_group_000", 00:21:56.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:56.215 "listen_address": { 00:21:56.215 "trtype": "TCP", 00:21:56.215 "adrfam": "IPv4", 00:21:56.215 "traddr": "10.0.0.2", 00:21:56.215 "trsvcid": "4420" 00:21:56.215 }, 00:21:56.215 "peer_address": { 00:21:56.215 "trtype": "TCP", 00:21:56.215 "adrfam": "IPv4", 00:21:56.215 "traddr": "10.0.0.1", 00:21:56.215 "trsvcid": "50256" 00:21:56.215 }, 00:21:56.215 "auth": { 00:21:56.215 "state": "completed", 00:21:56.215 "digest": "sha256", 00:21:56.215 "dhgroup": "ffdhe2048" 00:21:56.215 } 00:21:56.215 } 00:21:56.215 ]' 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.215 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.476 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:21:56.476 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.419 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.420 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.681 00:21:57.681 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.681 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.682 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.682 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.682 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.682 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.682 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.943 { 00:21:57.943 "cntlid": 15, 00:21:57.943 "qid": 0, 00:21:57.943 "state": "enabled", 00:21:57.943 "thread": "nvmf_tgt_poll_group_000", 00:21:57.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:57.943 "listen_address": { 00:21:57.943 "trtype": "TCP", 00:21:57.943 "adrfam": "IPv4", 00:21:57.943 "traddr": "10.0.0.2", 00:21:57.943 "trsvcid": "4420" 00:21:57.943 }, 00:21:57.943 "peer_address": { 00:21:57.943 "trtype": "TCP", 00:21:57.943 "adrfam": "IPv4", 00:21:57.943 "traddr": "10.0.0.1", 
00:21:57.943 "trsvcid": "46628" 00:21:57.943 }, 00:21:57.943 "auth": { 00:21:57.943 "state": "completed", 00:21:57.943 "digest": "sha256", 00:21:57.943 "dhgroup": "ffdhe2048" 00:21:57.943 } 00:21:57.943 } 00:21:57.943 ]' 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.943 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.204 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:21:58.204 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.776 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.037 08:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.037 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.298 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.298 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.298 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.298 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.298 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.298 { 00:21:59.298 "cntlid": 17, 00:21:59.298 "qid": 0, 00:21:59.298 "state": "enabled", 00:21:59.298 "thread": "nvmf_tgt_poll_group_000", 00:21:59.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.298 "listen_address": { 00:21:59.298 "trtype": "TCP", 00:21:59.298 "adrfam": "IPv4", 00:21:59.298 "traddr": "10.0.0.2", 00:21:59.298 "trsvcid": "4420" 00:21:59.298 }, 00:21:59.298 "peer_address": { 00:21:59.298 "trtype": "TCP", 00:21:59.298 "adrfam": "IPv4", 00:21:59.298 "traddr": "10.0.0.1", 00:21:59.298 "trsvcid": "46658" 00:21:59.299 }, 00:21:59.299 "auth": { 00:21:59.299 "state": "completed", 00:21:59.299 "digest": "sha256", 00:21:59.299 "dhgroup": "ffdhe3072" 00:21:59.299 } 00:21:59.299 } 00:21:59.299 ]' 00:21:59.299 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.559 08:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.559 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.820 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:21:59.820 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.391 08:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.391 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.651 08:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.651 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.912 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.912 { 00:22:00.912 "cntlid": 19, 00:22:00.912 "qid": 0, 00:22:00.912 "state": "enabled", 00:22:00.912 "thread": "nvmf_tgt_poll_group_000", 00:22:00.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:00.912 "listen_address": { 00:22:00.912 "trtype": "TCP", 00:22:00.912 "adrfam": "IPv4", 00:22:00.912 "traddr": "10.0.0.2", 00:22:00.912 "trsvcid": "4420" 00:22:00.912 }, 00:22:00.912 "peer_address": { 00:22:00.912 "trtype": "TCP", 00:22:00.912 "adrfam": "IPv4", 00:22:00.912 "traddr": "10.0.0.1", 00:22:00.912 "trsvcid": "46694" 00:22:00.912 }, 00:22:00.912 "auth": { 00:22:00.912 "state": "completed", 00:22:00.912 "digest": "sha256", 00:22:00.912 "dhgroup": "ffdhe3072" 00:22:00.912 } 00:22:00.912 } 00:22:00.912 ]' 00:22:00.912 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.173 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.434 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:01.434 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.005 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:02.005 08:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.265 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.266 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.266 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.526 00:22:02.526 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.526 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.526 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.787 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.787 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.787 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.788 { 00:22:02.788 "cntlid": 21, 00:22:02.788 "qid": 0, 00:22:02.788 "state": "enabled", 00:22:02.788 "thread": "nvmf_tgt_poll_group_000", 00:22:02.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.788 "listen_address": { 00:22:02.788 "trtype": "TCP", 00:22:02.788 "adrfam": "IPv4", 00:22:02.788 "traddr": "10.0.0.2", 00:22:02.788 
"trsvcid": "4420" 00:22:02.788 }, 00:22:02.788 "peer_address": { 00:22:02.788 "trtype": "TCP", 00:22:02.788 "adrfam": "IPv4", 00:22:02.788 "traddr": "10.0.0.1", 00:22:02.788 "trsvcid": "46722" 00:22:02.788 }, 00:22:02.788 "auth": { 00:22:02.788 "state": "completed", 00:22:02.788 "digest": "sha256", 00:22:02.788 "dhgroup": "ffdhe3072" 00:22:02.788 } 00:22:02.788 } 00:22:02.788 ]' 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.788 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.048 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:03.048 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:03.619 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:03.880 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.881 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.141 00:22:04.141 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.141 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.141 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.402 { 00:22:04.402 "cntlid": 23, 00:22:04.402 "qid": 0, 00:22:04.402 "state": "enabled", 00:22:04.402 "thread": "nvmf_tgt_poll_group_000", 00:22:04.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.402 "listen_address": { 00:22:04.402 "trtype": "TCP", 00:22:04.402 "adrfam": "IPv4", 00:22:04.402 "traddr": "10.0.0.2", 00:22:04.402 "trsvcid": "4420" 00:22:04.402 }, 00:22:04.402 "peer_address": { 00:22:04.402 "trtype": "TCP", 00:22:04.402 "adrfam": "IPv4", 00:22:04.402 "traddr": "10.0.0.1", 00:22:04.402 "trsvcid": "46746" 00:22:04.402 }, 00:22:04.402 "auth": { 00:22:04.402 "state": "completed", 00:22:04.402 "digest": "sha256", 00:22:04.402 "dhgroup": "ffdhe3072" 00:22:04.402 } 00:22:04.402 } 00:22:04.402 ]' 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.402 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.402 08:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.402 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.402 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.402 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.402 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.677 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:04.677 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:05.252 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:05.514 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.514 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.776 00:22:05.776 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.776 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.776 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.037 08:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.037 { 00:22:06.037 "cntlid": 25, 00:22:06.037 "qid": 0, 00:22:06.037 "state": "enabled", 00:22:06.037 "thread": "nvmf_tgt_poll_group_000", 00:22:06.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.037 "listen_address": { 00:22:06.037 "trtype": "TCP", 00:22:06.037 "adrfam": "IPv4", 00:22:06.037 "traddr": "10.0.0.2", 00:22:06.037 "trsvcid": "4420" 00:22:06.037 }, 00:22:06.037 "peer_address": { 00:22:06.037 "trtype": "TCP", 00:22:06.037 "adrfam": "IPv4", 00:22:06.037 "traddr": "10.0.0.1", 00:22:06.037 "trsvcid": "46784" 00:22:06.037 }, 00:22:06.037 "auth": { 00:22:06.037 "state": "completed", 00:22:06.037 "digest": "sha256", 00:22:06.037 "dhgroup": "ffdhe4096" 00:22:06.037 } 00:22:06.037 } 00:22:06.037 ]' 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.037 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.298 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:06.298 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:07.241 08:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.241 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.502 00:22:07.502 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.502 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.502 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.762 { 00:22:07.762 "cntlid": 27, 00:22:07.762 "qid": 0, 00:22:07.762 "state": "enabled", 00:22:07.762 "thread": "nvmf_tgt_poll_group_000", 00:22:07.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:07.762 "listen_address": { 00:22:07.762 "trtype": "TCP", 00:22:07.762 "adrfam": "IPv4", 00:22:07.762 "traddr": "10.0.0.2", 00:22:07.762 
"trsvcid": "4420" 00:22:07.762 }, 00:22:07.762 "peer_address": { 00:22:07.762 "trtype": "TCP", 00:22:07.762 "adrfam": "IPv4", 00:22:07.762 "traddr": "10.0.0.1", 00:22:07.762 "trsvcid": "59012" 00:22:07.762 }, 00:22:07.762 "auth": { 00:22:07.762 "state": "completed", 00:22:07.762 "digest": "sha256", 00:22:07.762 "dhgroup": "ffdhe4096" 00:22:07.762 } 00:22:07.762 } 00:22:07.762 ]' 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.762 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.023 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.023 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.023 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.023 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:08.023 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.967 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.229 00:22:09.229 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.229 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:09.229 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.489 { 00:22:09.489 "cntlid": 29, 00:22:09.489 "qid": 0, 00:22:09.489 "state": "enabled", 00:22:09.489 "thread": "nvmf_tgt_poll_group_000", 00:22:09.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.489 "listen_address": { 00:22:09.489 "trtype": "TCP", 00:22:09.489 "adrfam": "IPv4", 00:22:09.489 "traddr": "10.0.0.2", 00:22:09.489 "trsvcid": "4420" 00:22:09.489 }, 00:22:09.489 "peer_address": { 00:22:09.489 "trtype": "TCP", 00:22:09.489 "adrfam": "IPv4", 00:22:09.489 "traddr": "10.0.0.1", 00:22:09.489 "trsvcid": "59040" 00:22:09.489 }, 00:22:09.489 "auth": { 00:22:09.489 "state": "completed", 00:22:09.489 "digest": "sha256", 00:22:09.489 "dhgroup": "ffdhe4096" 00:22:09.489 } 00:22:09.489 } 00:22:09.489 ]' 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.489 08:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.489 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.751 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:09.751 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.693 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.954 00:22:10.954 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.954 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.954 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.216 { 00:22:11.216 "cntlid": 31, 00:22:11.216 "qid": 0, 00:22:11.216 "state": "enabled", 00:22:11.216 "thread": "nvmf_tgt_poll_group_000", 00:22:11.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.216 "listen_address": { 00:22:11.216 "trtype": "TCP", 00:22:11.216 "adrfam": "IPv4", 00:22:11.216 "traddr": "10.0.0.2", 00:22:11.216 "trsvcid": "4420" 00:22:11.216 }, 00:22:11.216 "peer_address": { 00:22:11.216 "trtype": "TCP", 00:22:11.216 "adrfam": "IPv4", 00:22:11.216 "traddr": "10.0.0.1", 00:22:11.216 "trsvcid": "59078" 00:22:11.216 }, 00:22:11.216 "auth": { 00:22:11.216 "state": "completed", 00:22:11.216 "digest": "sha256", 00:22:11.216 "dhgroup": "ffdhe4096" 00:22:11.216 } 00:22:11.216 } 00:22:11.216 ]' 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.216 08:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.477 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:11.477 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.453 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.453 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.453 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.739 00:22:12.739 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.739 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.739 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.035 { 00:22:13.035 "cntlid": 33, 00:22:13.035 "qid": 0, 00:22:13.035 "state": "enabled", 00:22:13.035 "thread": "nvmf_tgt_poll_group_000", 00:22:13.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.035 "listen_address": { 00:22:13.035 "trtype": "TCP", 00:22:13.035 "adrfam": "IPv4", 00:22:13.035 "traddr": "10.0.0.2", 00:22:13.035 
"trsvcid": "4420" 00:22:13.035 }, 00:22:13.035 "peer_address": { 00:22:13.035 "trtype": "TCP", 00:22:13.035 "adrfam": "IPv4", 00:22:13.035 "traddr": "10.0.0.1", 00:22:13.035 "trsvcid": "59100" 00:22:13.035 }, 00:22:13.035 "auth": { 00:22:13.035 "state": "completed", 00:22:13.035 "digest": "sha256", 00:22:13.035 "dhgroup": "ffdhe6144" 00:22:13.035 } 00:22:13.035 } 00:22:13.035 ]' 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.035 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.296 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:13.296 08:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:14.238 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.239 08:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.239 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.499 00:22:14.499 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.499 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.499 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.760 { 00:22:14.760 "cntlid": 35, 00:22:14.760 "qid": 0, 00:22:14.760 "state": "enabled", 00:22:14.760 "thread": "nvmf_tgt_poll_group_000", 00:22:14.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.760 "listen_address": { 00:22:14.760 "trtype": "TCP", 00:22:14.760 "adrfam": "IPv4", 00:22:14.760 "traddr": "10.0.0.2", 00:22:14.760 "trsvcid": "4420" 00:22:14.760 }, 00:22:14.760 "peer_address": { 00:22:14.760 "trtype": "TCP", 00:22:14.760 "adrfam": "IPv4", 00:22:14.760 "traddr": "10.0.0.1", 00:22:14.760 "trsvcid": "59124" 00:22:14.760 }, 00:22:14.760 "auth": { 00:22:14.760 "state": "completed", 00:22:14.760 "digest": "sha256", 00:22:14.760 "dhgroup": "ffdhe6144" 00:22:14.760 } 00:22:14.760 } 00:22:14.760 ]' 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.760 08:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.760 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:15.021 08:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:15.962 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.963 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.963 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.963 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.963 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.963 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.534 00:22:16.534 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.534 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.534 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.534 08:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.534 { 00:22:16.534 "cntlid": 37, 00:22:16.534 "qid": 0, 00:22:16.534 "state": "enabled", 00:22:16.534 "thread": "nvmf_tgt_poll_group_000", 00:22:16.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:16.534 "listen_address": { 00:22:16.534 "trtype": "TCP", 00:22:16.534 "adrfam": "IPv4", 00:22:16.534 "traddr": "10.0.0.2", 00:22:16.534 "trsvcid": "4420" 00:22:16.534 }, 00:22:16.534 "peer_address": { 00:22:16.534 "trtype": "TCP", 00:22:16.534 "adrfam": "IPv4", 00:22:16.534 "traddr": "10.0.0.1", 00:22:16.534 "trsvcid": "59150" 00:22:16.534 }, 00:22:16.534 "auth": { 00:22:16.534 "state": "completed", 00:22:16.534 "digest": "sha256", 00:22:16.534 "dhgroup": "ffdhe6144" 00:22:16.534 } 00:22:16.534 } 00:22:16.534 ]' 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.534 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.795 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.795 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.795 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.795 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:16.795 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.737 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.309 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.309 { 00:22:18.309 "cntlid": 39, 00:22:18.309 "qid": 0, 00:22:18.309 "state": "enabled", 00:22:18.309 "thread": "nvmf_tgt_poll_group_000", 00:22:18.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.309 "listen_address": { 00:22:18.309 "trtype": "TCP", 00:22:18.309 "adrfam": 
"IPv4", 00:22:18.309 "traddr": "10.0.0.2", 00:22:18.309 "trsvcid": "4420" 00:22:18.309 }, 00:22:18.309 "peer_address": { 00:22:18.309 "trtype": "TCP", 00:22:18.309 "adrfam": "IPv4", 00:22:18.309 "traddr": "10.0.0.1", 00:22:18.309 "trsvcid": "53284" 00:22:18.309 }, 00:22:18.309 "auth": { 00:22:18.309 "state": "completed", 00:22:18.309 "digest": "sha256", 00:22:18.309 "dhgroup": "ffdhe6144" 00:22:18.309 } 00:22:18.309 } 00:22:18.309 ]' 00:22:18.309 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.309 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.310 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.570 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.570 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.570 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.570 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.570 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.831 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:18.831 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.403 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:19.664 
08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.664 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.235 00:22:20.235 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.235 08:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.235 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.236 { 00:22:20.236 "cntlid": 41, 00:22:20.236 "qid": 0, 00:22:20.236 "state": "enabled", 00:22:20.236 "thread": "nvmf_tgt_poll_group_000", 00:22:20.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:20.236 "listen_address": { 00:22:20.236 "trtype": "TCP", 00:22:20.236 "adrfam": "IPv4", 00:22:20.236 "traddr": "10.0.0.2", 00:22:20.236 "trsvcid": "4420" 00:22:20.236 }, 00:22:20.236 "peer_address": { 00:22:20.236 "trtype": "TCP", 00:22:20.236 "adrfam": "IPv4", 00:22:20.236 "traddr": "10.0.0.1", 00:22:20.236 "trsvcid": "53298" 00:22:20.236 }, 00:22:20.236 "auth": { 00:22:20.236 "state": "completed", 00:22:20.236 "digest": "sha256", 00:22:20.236 "dhgroup": "ffdhe8192" 00:22:20.236 } 00:22:20.236 } 00:22:20.236 ]' 00:22:20.236 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.496 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.757 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:20.757 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.328 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.590 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.162 00:22:22.162 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.162 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.162 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.423 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.423 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.423 08:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.423 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.423 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.423 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.423 { 00:22:22.423 "cntlid": 43, 00:22:22.423 "qid": 0, 00:22:22.423 "state": "enabled", 00:22:22.423 "thread": "nvmf_tgt_poll_group_000", 00:22:22.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.424 "listen_address": { 00:22:22.424 "trtype": "TCP", 00:22:22.424 "adrfam": "IPv4", 00:22:22.424 "traddr": "10.0.0.2", 00:22:22.424 "trsvcid": "4420" 00:22:22.424 }, 00:22:22.424 "peer_address": { 00:22:22.424 "trtype": "TCP", 00:22:22.424 "adrfam": "IPv4", 00:22:22.424 "traddr": "10.0.0.1", 00:22:22.424 "trsvcid": "53326" 00:22:22.424 }, 00:22:22.424 "auth": { 00:22:22.424 "state": "completed", 00:22:22.424 "digest": "sha256", 00:22:22.424 "dhgroup": "ffdhe8192" 00:22:22.424 } 00:22:22.424 } 00:22:22.424 ]' 00:22:22.424 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.424 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.684 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:22.684 08:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.626 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.198 00:22:24.198 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.198 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.198 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.461 { 00:22:24.461 "cntlid": 45, 00:22:24.461 "qid": 0, 00:22:24.461 "state": "enabled", 00:22:24.461 "thread": "nvmf_tgt_poll_group_000", 00:22:24.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:24.461 
"listen_address": { 00:22:24.461 "trtype": "TCP", 00:22:24.461 "adrfam": "IPv4", 00:22:24.461 "traddr": "10.0.0.2", 00:22:24.461 "trsvcid": "4420" 00:22:24.461 }, 00:22:24.461 "peer_address": { 00:22:24.461 "trtype": "TCP", 00:22:24.461 "adrfam": "IPv4", 00:22:24.461 "traddr": "10.0.0.1", 00:22:24.461 "trsvcid": "53352" 00:22:24.461 }, 00:22:24.461 "auth": { 00:22:24.461 "state": "completed", 00:22:24.461 "digest": "sha256", 00:22:24.461 "dhgroup": "ffdhe8192" 00:22:24.461 } 00:22:24.461 } 00:22:24.461 ]' 00:22:24.461 08:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.461 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.721 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:24.721 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.666 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.237 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.237 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.498 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.498 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.498 { 00:22:26.498 "cntlid": 47, 00:22:26.498 "qid": 0, 00:22:26.498 "state": "enabled", 00:22:26.498 "thread": "nvmf_tgt_poll_group_000", 00:22:26.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:26.498 "listen_address": { 00:22:26.498 "trtype": "TCP", 00:22:26.498 "adrfam": "IPv4", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "trsvcid": "4420" 00:22:26.498 }, 00:22:26.498 "peer_address": { 00:22:26.498 "trtype": "TCP", 00:22:26.498 "adrfam": "IPv4", 00:22:26.498 "traddr": "10.0.0.1", 00:22:26.498 "trsvcid": "53378" 00:22:26.498 }, 00:22:26.498 "auth": { 00:22:26.498 "state": "completed", 00:22:26.498 "digest": "sha256", 00:22:26.498 "dhgroup": "ffdhe8192" 00:22:26.498 } 00:22:26.498 } 00:22:26.498 ]' 00:22:26.498 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.498 08:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:26.498 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:26.759 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:22:26.759 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:27.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:27.347 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:27.607 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:27.867
00:22:27.867 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:27.867 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:27.867 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:28.127 {
00:22:28.127 "cntlid": 49,
00:22:28.127 "qid": 0,
00:22:28.127 "state": "enabled",
00:22:28.127 "thread": "nvmf_tgt_poll_group_000",
00:22:28.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:28.127 "listen_address": {
00:22:28.127 "trtype": "TCP",
00:22:28.127 "adrfam": "IPv4",
00:22:28.127 "traddr": "10.0.0.2",
00:22:28.127 "trsvcid": "4420"
00:22:28.127 },
00:22:28.127 "peer_address": {
00:22:28.127 "trtype": "TCP",
00:22:28.127 "adrfam": "IPv4",
00:22:28.127 "traddr": "10.0.0.1",
00:22:28.127 "trsvcid": "46410"
00:22:28.127 },
00:22:28.127 "auth": {
00:22:28.127 "state": "completed",
00:22:28.127 "digest": "sha384",
00:22:28.127 "dhgroup": "null"
00:22:28.127 }
00:22:28.127 }
00:22:28.127 ]'
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:28.127 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:28.128 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:28.388 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:22:28.388 08:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:29.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:29.328 08:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:29.588
00:22:29.588 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:29.588 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:29.588 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:29.849 {
00:22:29.849 "cntlid": 51,
00:22:29.849 "qid": 0,
00:22:29.849 "state": "enabled",
00:22:29.849 "thread": "nvmf_tgt_poll_group_000",
00:22:29.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:29.849 "listen_address": {
00:22:29.849 "trtype": "TCP",
00:22:29.849 "adrfam": "IPv4",
00:22:29.849 "traddr": "10.0.0.2",
00:22:29.849 "trsvcid": "4420"
00:22:29.849 },
00:22:29.849 "peer_address": {
00:22:29.849 "trtype": "TCP",
00:22:29.849 "adrfam": "IPv4",
00:22:29.849 "traddr": "10.0.0.1",
00:22:29.849 "trsvcid": "46434"
00:22:29.849 },
00:22:29.849 "auth": {
00:22:29.849 "state": "completed",
00:22:29.849 "digest": "sha384",
00:22:29.849 "dhgroup": "null"
00:22:29.849 }
00:22:29.849 }
00:22:29.849 ]'
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:29.849 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:30.109 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==:
00:22:30.109 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==:
00:22:30.678 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:30.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.939 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:30.940 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:30.940 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:31.200
00:22:31.200 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:31.200 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:31.200 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:31.460 {
00:22:31.460 "cntlid": 53,
00:22:31.460 "qid": 0,
00:22:31.460 "state": "enabled",
00:22:31.460 "thread": "nvmf_tgt_poll_group_000",
00:22:31.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:31.460 "listen_address": {
00:22:31.460 "trtype": "TCP",
00:22:31.460 "adrfam": "IPv4",
00:22:31.460 "traddr": "10.0.0.2",
00:22:31.460 "trsvcid": "4420"
00:22:31.460 },
00:22:31.460 "peer_address": {
00:22:31.460 "trtype": "TCP",
00:22:31.460 "adrfam": "IPv4",
00:22:31.460 "traddr": "10.0.0.1",
00:22:31.460 "trsvcid": "46466"
00:22:31.460 },
00:22:31.460 "auth": {
00:22:31.460 "state": "completed",
00:22:31.460 "digest": "sha384",
00:22:31.460 "dhgroup": "null"
00:22:31.460 }
00:22:31.460 }
00:22:31.460 ]'
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:31.460 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:31.720 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:31.720 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:31.720 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:31.720 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV:
00:22:31.721 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV:
00:22:32.664 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:32.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:32.664 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:32.664 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:32.664 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:32.664 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:32.665 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:32.925
00:22:32.925 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:32.925 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:32.925 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:33.186 {
00:22:33.186 "cntlid": 55,
00:22:33.186 "qid": 0,
00:22:33.186 "state": "enabled",
00:22:33.186 "thread": "nvmf_tgt_poll_group_000",
00:22:33.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:33.186 "listen_address": {
00:22:33.186 "trtype": "TCP",
00:22:33.186 "adrfam": "IPv4",
00:22:33.186 "traddr": "10.0.0.2",
00:22:33.186 "trsvcid": "4420"
00:22:33.186 },
00:22:33.186 "peer_address": {
00:22:33.186 "trtype": "TCP",
00:22:33.186 "adrfam": "IPv4",
00:22:33.186 "traddr": "10.0.0.1",
00:22:33.186 "trsvcid": "46498"
00:22:33.186 },
00:22:33.186 "auth": {
00:22:33.186 "state": "completed",
00:22:33.186 "digest": "sha384",
00:22:33.186 "dhgroup": "null"
00:22:33.186 }
00:22:33.186 }
00:22:33.186 ]'
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:33.186 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:33.447 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:33.447 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:33.447 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:33.447 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:22:33.447 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:34.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:22:34.391 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:34.391 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:34.650
00:22:34.650 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:34.650 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:34.650 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:34.911 {
00:22:34.911 "cntlid": 57,
00:22:34.911 "qid": 0,
00:22:34.911 "state": "enabled",
00:22:34.911 "thread": "nvmf_tgt_poll_group_000",
00:22:34.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:34.911 "listen_address": {
00:22:34.911 "trtype": "TCP",
00:22:34.911 "adrfam": "IPv4",
00:22:34.911 "traddr": "10.0.0.2",
00:22:34.911 "trsvcid": "4420"
00:22:34.911 },
00:22:34.911 "peer_address": {
00:22:34.911 "trtype": "TCP",
00:22:34.911 "adrfam": "IPv4",
00:22:34.911 "traddr": "10.0.0.1",
00:22:34.911 "trsvcid": "46526"
00:22:34.911 },
00:22:34.911 "auth": {
00:22:34.911 "state": "completed",
00:22:34.911 "digest": "sha384",
00:22:34.911 "dhgroup": "ffdhe2048"
00:22:34.911 }
00:22:34.911 }
00:22:34.911 ]'
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:34.911 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:35.171 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:22:35.171 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:36.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:36.114 08:19:40
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.114 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.375 00:22:36.375 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.375 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.375 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.635 { 00:22:36.635 "cntlid": 59, 00:22:36.635 "qid": 0, 00:22:36.635 "state": "enabled", 00:22:36.635 "thread": "nvmf_tgt_poll_group_000", 00:22:36.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.635 "listen_address": { 00:22:36.635 "trtype": "TCP", 00:22:36.635 "adrfam": "IPv4", 00:22:36.635 "traddr": "10.0.0.2", 00:22:36.635 "trsvcid": "4420" 00:22:36.635 }, 00:22:36.635 "peer_address": { 00:22:36.635 "trtype": "TCP", 00:22:36.635 "adrfam": "IPv4", 00:22:36.635 "traddr": "10.0.0.1", 00:22:36.635 "trsvcid": "46548" 00:22:36.635 }, 00:22:36.635 "auth": { 00:22:36.635 "state": "completed", 00:22:36.635 "digest": "sha384", 00:22:36.635 "dhgroup": "ffdhe2048" 00:22:36.635 } 00:22:36.635 } 00:22:36.635 ]' 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.635 08:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.635 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.895 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:36.895 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.840 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.841 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.102 00:22:38.102 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.102 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.102 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.364 08:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.364 { 00:22:38.364 "cntlid": 61, 00:22:38.364 "qid": 0, 00:22:38.364 "state": "enabled", 00:22:38.364 "thread": "nvmf_tgt_poll_group_000", 00:22:38.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:38.364 "listen_address": { 00:22:38.364 "trtype": "TCP", 00:22:38.364 "adrfam": "IPv4", 00:22:38.364 "traddr": "10.0.0.2", 00:22:38.364 "trsvcid": "4420" 00:22:38.364 }, 00:22:38.364 "peer_address": { 00:22:38.364 "trtype": "TCP", 00:22:38.364 "adrfam": "IPv4", 00:22:38.364 "traddr": "10.0.0.1", 00:22:38.364 "trsvcid": "50052" 00:22:38.364 }, 00:22:38.364 "auth": { 00:22:38.364 "state": "completed", 00:22:38.364 "digest": "sha384", 00:22:38.364 "dhgroup": "ffdhe2048" 00:22:38.364 } 00:22:38.364 } 00:22:38.364 ]' 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:38.364 08:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.364 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.364 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.364 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.624 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:38.624 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.566 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:39.566 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.567 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.567 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.567 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.567 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.567 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.827 00:22:39.827 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.827 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.827 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.088 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.088 { 00:22:40.088 "cntlid": 63, 00:22:40.088 "qid": 0, 00:22:40.088 "state": "enabled", 00:22:40.088 "thread": "nvmf_tgt_poll_group_000", 00:22:40.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.088 "listen_address": { 00:22:40.088 "trtype": "TCP", 00:22:40.088 "adrfam": 
"IPv4", 00:22:40.088 "traddr": "10.0.0.2", 00:22:40.089 "trsvcid": "4420" 00:22:40.089 }, 00:22:40.089 "peer_address": { 00:22:40.089 "trtype": "TCP", 00:22:40.089 "adrfam": "IPv4", 00:22:40.089 "traddr": "10.0.0.1", 00:22:40.089 "trsvcid": "50076" 00:22:40.089 }, 00:22:40.089 "auth": { 00:22:40.089 "state": "completed", 00:22:40.089 "digest": "sha384", 00:22:40.089 "dhgroup": "ffdhe2048" 00:22:40.089 } 00:22:40.089 } 00:22:40.089 ]' 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.089 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.350 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:40.350 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:40.922 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.189 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.190 
08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.190 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.453 00:22:41.454 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.454 08:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.454 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.714 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.714 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.714 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.715 { 00:22:41.715 "cntlid": 65, 00:22:41.715 "qid": 0, 00:22:41.715 "state": "enabled", 00:22:41.715 "thread": "nvmf_tgt_poll_group_000", 00:22:41.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:41.715 "listen_address": { 00:22:41.715 "trtype": "TCP", 00:22:41.715 "adrfam": "IPv4", 00:22:41.715 "traddr": "10.0.0.2", 00:22:41.715 "trsvcid": "4420" 00:22:41.715 }, 00:22:41.715 "peer_address": { 00:22:41.715 "trtype": "TCP", 00:22:41.715 "adrfam": "IPv4", 00:22:41.715 "traddr": "10.0.0.1", 00:22:41.715 "trsvcid": "50094" 00:22:41.715 }, 00:22:41.715 "auth": { 00:22:41.715 "state": "completed", 00:22:41.715 "digest": "sha384", 00:22:41.715 "dhgroup": "ffdhe3072" 00:22:41.715 } 00:22:41.715 } 00:22:41.715 ]' 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.715 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.975 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:41.975 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.917 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.177 00:22:43.177 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.177 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.177 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.438 08:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.438 { 00:22:43.438 "cntlid": 67, 00:22:43.438 "qid": 0, 00:22:43.438 "state": "enabled", 00:22:43.438 "thread": "nvmf_tgt_poll_group_000", 00:22:43.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:43.438 "listen_address": { 00:22:43.438 "trtype": "TCP", 00:22:43.438 "adrfam": "IPv4", 00:22:43.438 "traddr": "10.0.0.2", 00:22:43.438 "trsvcid": "4420" 00:22:43.438 }, 00:22:43.438 "peer_address": { 00:22:43.438 "trtype": "TCP", 00:22:43.438 "adrfam": "IPv4", 00:22:43.438 "traddr": "10.0.0.1", 00:22:43.438 "trsvcid": "50116" 00:22:43.438 }, 00:22:43.438 "auth": { 00:22:43.438 "state": "completed", 00:22:43.438 "digest": "sha384", 00:22:43.438 "dhgroup": "ffdhe3072" 00:22:43.438 } 00:22:43.438 } 00:22:43.438 ]' 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.438 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.699 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:43.699 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.643 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.907 00:22:44.907 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.907 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.907 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.168 { 00:22:45.168 "cntlid": 69, 00:22:45.168 "qid": 0, 00:22:45.168 "state": "enabled", 00:22:45.168 "thread": "nvmf_tgt_poll_group_000", 00:22:45.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:45.168 
"listen_address": { 00:22:45.168 "trtype": "TCP", 00:22:45.168 "adrfam": "IPv4", 00:22:45.168 "traddr": "10.0.0.2", 00:22:45.168 "trsvcid": "4420" 00:22:45.168 }, 00:22:45.168 "peer_address": { 00:22:45.168 "trtype": "TCP", 00:22:45.168 "adrfam": "IPv4", 00:22:45.168 "traddr": "10.0.0.1", 00:22:45.168 "trsvcid": "50142" 00:22:45.168 }, 00:22:45.168 "auth": { 00:22:45.168 "state": "completed", 00:22:45.168 "digest": "sha384", 00:22:45.168 "dhgroup": "ffdhe3072" 00:22:45.168 } 00:22:45.168 } 00:22:45.168 ]' 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.168 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.428 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:45.428 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.371 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.371 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.632 00:22:46.632 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.632 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:46.632 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.893 { 00:22:46.893 "cntlid": 71, 00:22:46.893 "qid": 0, 00:22:46.893 "state": "enabled", 00:22:46.893 "thread": "nvmf_tgt_poll_group_000", 00:22:46.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.893 "listen_address": { 00:22:46.893 "trtype": "TCP", 00:22:46.893 "adrfam": "IPv4", 00:22:46.893 "traddr": "10.0.0.2", 00:22:46.893 "trsvcid": "4420" 00:22:46.893 }, 00:22:46.893 "peer_address": { 00:22:46.893 "trtype": "TCP", 00:22:46.893 "adrfam": "IPv4", 00:22:46.893 "traddr": "10.0.0.1", 00:22:46.893 "trsvcid": "50160" 00:22:46.893 }, 00:22:46.893 "auth": { 00:22:46.893 "state": "completed", 00:22:46.893 "digest": "sha384", 00:22:46.893 "dhgroup": "ffdhe3072" 00:22:46.893 } 00:22:46.893 } 00:22:46.893 ]' 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.893 08:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.893 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.894 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.154 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:47.154 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:48.096 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.097 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.358 00:22:48.358 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.358 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.358 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.620 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.620 { 00:22:48.620 "cntlid": 73, 00:22:48.620 "qid": 0, 00:22:48.620 "state": "enabled", 00:22:48.620 "thread": "nvmf_tgt_poll_group_000", 00:22:48.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.620 "listen_address": { 00:22:48.620 "trtype": "TCP", 00:22:48.620 "adrfam": "IPv4", 00:22:48.620 "traddr": "10.0.0.2", 00:22:48.620 "trsvcid": "4420" 00:22:48.620 }, 00:22:48.620 "peer_address": { 00:22:48.620 "trtype": "TCP", 00:22:48.620 "adrfam": "IPv4", 00:22:48.620 "traddr": "10.0.0.1", 00:22:48.620 "trsvcid": "36202" 00:22:48.620 }, 00:22:48.620 "auth": { 00:22:48.620 "state": "completed", 00:22:48.620 "digest": "sha384", 00:22:48.620 "dhgroup": "ffdhe4096" 00:22:48.620 } 00:22:48.620 } 00:22:48.620 ]' 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.620 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.620 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.881 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:48.881 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.821 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.822 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.082 00:22:50.082 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.082 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.082 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.343 { 00:22:50.343 "cntlid": 75, 00:22:50.343 "qid": 0, 00:22:50.343 "state": "enabled", 00:22:50.343 "thread": "nvmf_tgt_poll_group_000", 00:22:50.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:50.343 
"listen_address": { 00:22:50.343 "trtype": "TCP", 00:22:50.343 "adrfam": "IPv4", 00:22:50.343 "traddr": "10.0.0.2", 00:22:50.343 "trsvcid": "4420" 00:22:50.343 }, 00:22:50.343 "peer_address": { 00:22:50.343 "trtype": "TCP", 00:22:50.343 "adrfam": "IPv4", 00:22:50.343 "traddr": "10.0.0.1", 00:22:50.343 "trsvcid": "36224" 00:22:50.343 }, 00:22:50.343 "auth": { 00:22:50.343 "state": "completed", 00:22:50.343 "digest": "sha384", 00:22:50.343 "dhgroup": "ffdhe4096" 00:22:50.343 } 00:22:50.343 } 00:22:50.343 ]' 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.343 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.343 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:50.343 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.343 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.343 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.343 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.603 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:50.603 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.546 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.860 00:22:51.860 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:22:51.860 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.860 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.135 { 00:22:52.135 "cntlid": 77, 00:22:52.135 "qid": 0, 00:22:52.135 "state": "enabled", 00:22:52.135 "thread": "nvmf_tgt_poll_group_000", 00:22:52.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:52.135 "listen_address": { 00:22:52.135 "trtype": "TCP", 00:22:52.135 "adrfam": "IPv4", 00:22:52.135 "traddr": "10.0.0.2", 00:22:52.135 "trsvcid": "4420" 00:22:52.135 }, 00:22:52.135 "peer_address": { 00:22:52.135 "trtype": "TCP", 00:22:52.135 "adrfam": "IPv4", 00:22:52.135 "traddr": "10.0.0.1", 00:22:52.135 "trsvcid": "36250" 00:22:52.135 }, 00:22:52.135 "auth": { 00:22:52.135 "state": "completed", 00:22:52.135 "digest": "sha384", 00:22:52.135 "dhgroup": "ffdhe4096" 00:22:52.135 } 00:22:52.135 } 00:22:52.135 ]' 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.135 08:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.135 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.430 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:52.431 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:53.003 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:53.264 08:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.264 08:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.525 00:22:53.525 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.525 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.525 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.786 08:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.786 { 00:22:53.786 "cntlid": 79, 00:22:53.786 "qid": 0, 00:22:53.786 "state": "enabled", 00:22:53.786 "thread": "nvmf_tgt_poll_group_000", 00:22:53.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.786 "listen_address": { 00:22:53.786 "trtype": "TCP", 00:22:53.786 "adrfam": "IPv4", 00:22:53.786 "traddr": "10.0.0.2", 00:22:53.786 "trsvcid": "4420" 00:22:53.786 }, 00:22:53.786 "peer_address": { 00:22:53.786 "trtype": "TCP", 00:22:53.786 "adrfam": "IPv4", 00:22:53.786 "traddr": "10.0.0.1", 00:22:53.786 "trsvcid": "36278" 00:22:53.786 }, 00:22:53.786 "auth": { 00:22:53.786 "state": "completed", 00:22:53.786 "digest": "sha384", 00:22:53.786 "dhgroup": "ffdhe4096" 00:22:53.786 } 00:22:53.786 } 00:22:53.786 ]' 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:53.786 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.046 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.046 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.046 08:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.046 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:54.046 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.023 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.594 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.594 { 00:22:55.594 "cntlid": 81, 00:22:55.594 "qid": 0, 00:22:55.594 "state": "enabled", 00:22:55.594 "thread": "nvmf_tgt_poll_group_000", 00:22:55.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:55.594 "listen_address": { 
00:22:55.594 "trtype": "TCP", 00:22:55.594 "adrfam": "IPv4", 00:22:55.594 "traddr": "10.0.0.2", 00:22:55.594 "trsvcid": "4420" 00:22:55.594 }, 00:22:55.594 "peer_address": { 00:22:55.594 "trtype": "TCP", 00:22:55.594 "adrfam": "IPv4", 00:22:55.594 "traddr": "10.0.0.1", 00:22:55.594 "trsvcid": "36312" 00:22:55.594 }, 00:22:55.594 "auth": { 00:22:55.594 "state": "completed", 00:22:55.594 "digest": "sha384", 00:22:55.594 "dhgroup": "ffdhe6144" 00:22:55.594 } 00:22:55.594 } 00:22:55.594 ]' 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.594 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:55.854 08:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:22:56.796 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
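After each attach, the test pipes `nvmf_subsystem_get_qpairs` output through `jq` and string-compares `auth.digest`, `auth.dhgroup`, and `auth.state`. The sketch below reproduces those three checks in a self-contained form; the JSON is a trimmed copy of one qpair entry from this run, and the `field()` helper is a deliberately naive stand-in for `jq -r '.[0].auth.…'` that only works on flat samples like this one.

```shell
#!/usr/bin/env bash
# Hedged sketch of the qpair verification step. The real test uses jq; this
# grep/sed stand-in avoids that dependency but is not a general JSON parser.
set -euo pipefail

qpairs='[
  {
    "cntlid": 81,
    "qid": 0,
    "state": "enabled",
    "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe6144" }
  }
]'

# field NAME [N]: value of the N-th occurrence of "NAME": "value" (default 1st).
field() {
  printf '%s\n' "$qpairs" \
    | grep -o "\"$1\": \"[^\"]*\"" \
    | sed -n "${2:-1}p" \
    | cut -d'"' -f4
}

# Mirror of the [[ sha384 == ... ]] / [[ completed == ... ]] checks in the log.
# Note auth.state is the *second* "state" key; the first is the qpair state.
if [[ $(field digest) == sha384 \
   && $(field dhgroup) == ffdhe6144 \
   && $(field state 2) == completed ]]; then
  echo "auth verified"
fi
```

`auth.state == "completed"` is the actual pass condition: it confirms the DH-HMAC-CHAP exchange finished on the live qpair rather than merely that the connection was established.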
00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.797 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.368 00:22:57.368 08:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.368 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.368 08:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.368 { 00:22:57.368 "cntlid": 83, 00:22:57.368 "qid": 0, 00:22:57.368 "state": "enabled", 00:22:57.368 "thread": "nvmf_tgt_poll_group_000", 00:22:57.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:57.368 "listen_address": { 00:22:57.368 "trtype": "TCP", 00:22:57.368 "adrfam": "IPv4", 00:22:57.368 "traddr": "10.0.0.2", 00:22:57.368 "trsvcid": "4420" 00:22:57.368 }, 00:22:57.368 "peer_address": { 00:22:57.368 "trtype": "TCP", 00:22:57.368 "adrfam": "IPv4", 00:22:57.368 "traddr": "10.0.0.1", 00:22:57.368 "trsvcid": "36348" 00:22:57.368 }, 00:22:57.368 "auth": { 00:22:57.368 "state": "completed", 00:22:57.368 "digest": "sha384", 00:22:57.368 "dhgroup": "ffdhe6144" 00:22:57.368 } 00:22:57.368 } 00:22:57.368 ]' 00:22:57.368 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.630 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.890 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:57.890 08:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.461 08:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.461 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.723 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.984 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.245 { 00:22:59.245 "cntlid": 85, 00:22:59.245 "qid": 0, 00:22:59.245 "state": "enabled", 00:22:59.245 "thread": "nvmf_tgt_poll_group_000", 00:22:59.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:59.245 "listen_address": { 00:22:59.245 "trtype": "TCP", 00:22:59.245 "adrfam": "IPv4", 00:22:59.245 "traddr": "10.0.0.2", 00:22:59.245 "trsvcid": "4420" 00:22:59.245 }, 00:22:59.245 "peer_address": { 00:22:59.245 "trtype": "TCP", 00:22:59.245 "adrfam": "IPv4", 00:22:59.245 "traddr": "10.0.0.1", 00:22:59.245 "trsvcid": "58968" 00:22:59.245 }, 00:22:59.245 "auth": { 00:22:59.245 "state": "completed", 00:22:59.245 "digest": "sha384", 00:22:59.245 "dhgroup": "ffdhe6144" 00:22:59.245 } 00:22:59.245 } 00:22:59.245 ]' 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.245 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:22:59.507 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:00.450 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.450 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.711 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.971 00:23:00.971 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.971 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.971 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.232 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.232 { 00:23:01.232 "cntlid": 87, 00:23:01.232 "qid": 0, 00:23:01.232 "state": "enabled", 00:23:01.232 "thread": "nvmf_tgt_poll_group_000", 00:23:01.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:01.232 "listen_address": { 00:23:01.232 "trtype": 
"TCP", 00:23:01.233 "adrfam": "IPv4", 00:23:01.233 "traddr": "10.0.0.2", 00:23:01.233 "trsvcid": "4420" 00:23:01.233 }, 00:23:01.233 "peer_address": { 00:23:01.233 "trtype": "TCP", 00:23:01.233 "adrfam": "IPv4", 00:23:01.233 "traddr": "10.0.0.1", 00:23:01.233 "trsvcid": "58990" 00:23:01.233 }, 00:23:01.233 "auth": { 00:23:01.233 "state": "completed", 00:23:01.233 "digest": "sha384", 00:23:01.233 "dhgroup": "ffdhe6144" 00:23:01.233 } 00:23:01.233 } 00:23:01.233 ]' 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.233 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.493 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:01.493 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.438 08:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.438 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.438 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.438 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.438 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.438 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.011 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.011 { 00:23:03.011 "cntlid": 89, 00:23:03.011 "qid": 0, 00:23:03.011 "state": "enabled", 00:23:03.011 "thread": "nvmf_tgt_poll_group_000", 00:23:03.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:03.011 "listen_address": { 00:23:03.011 "trtype": "TCP", 00:23:03.011 "adrfam": "IPv4", 00:23:03.011 "traddr": "10.0.0.2", 00:23:03.011 "trsvcid": "4420" 00:23:03.011 }, 00:23:03.011 "peer_address": { 00:23:03.011 "trtype": "TCP", 00:23:03.011 "adrfam": "IPv4", 00:23:03.011 "traddr": "10.0.0.1", 00:23:03.011 "trsvcid": "59024" 00:23:03.011 }, 00:23:03.011 "auth": { 00:23:03.011 "state": "completed", 00:23:03.011 "digest": "sha384", 00:23:03.011 "dhgroup": "ffdhe8192" 00:23:03.011 } 00:23:03.011 } 00:23:03.011 ]' 00:23:03.011 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.272 08:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.272 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.533 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:03.533 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.105 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.366 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.366 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.366 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.366 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.366 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.939 00:23:04.939 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.939 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.939 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.200 { 00:23:05.200 "cntlid": 91, 00:23:05.200 "qid": 0, 00:23:05.200 "state": "enabled", 00:23:05.200 "thread": "nvmf_tgt_poll_group_000", 00:23:05.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:05.200 "listen_address": { 00:23:05.200 "trtype": "TCP", 00:23:05.200 "adrfam": "IPv4", 00:23:05.200 "traddr": "10.0.0.2", 00:23:05.200 "trsvcid": "4420" 00:23:05.200 }, 00:23:05.200 "peer_address": { 00:23:05.200 "trtype": "TCP", 00:23:05.200 "adrfam": "IPv4", 00:23:05.200 "traddr": "10.0.0.1", 00:23:05.200 "trsvcid": "59050" 00:23:05.200 }, 00:23:05.200 "auth": { 00:23:05.200 "state": "completed", 00:23:05.200 "digest": "sha384", 00:23:05.200 "dhgroup": "ffdhe8192" 00:23:05.200 } 00:23:05.200 } 00:23:05.200 ]' 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.200 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.201 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:05.201 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.201 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.462 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:05.462 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.406 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.406 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.979 00:23:06.979 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.979 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.979 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.241 { 00:23:07.241 "cntlid": 93, 00:23:07.241 "qid": 0, 00:23:07.241 "state": "enabled", 00:23:07.241 "thread": "nvmf_tgt_poll_group_000", 00:23:07.241 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:07.241 "listen_address": { 00:23:07.241 "trtype": "TCP", 00:23:07.241 "adrfam": "IPv4", 00:23:07.241 "traddr": "10.0.0.2", 00:23:07.241 "trsvcid": "4420" 00:23:07.241 }, 00:23:07.241 "peer_address": { 00:23:07.241 "trtype": "TCP", 00:23:07.241 "adrfam": "IPv4", 00:23:07.241 "traddr": "10.0.0.1", 00:23:07.241 "trsvcid": "59072" 00:23:07.241 }, 00:23:07.241 "auth": { 00:23:07.241 "state": "completed", 00:23:07.241 "digest": "sha384", 00:23:07.241 "dhgroup": "ffdhe8192" 00:23:07.241 } 00:23:07.241 } 00:23:07.241 ]' 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.241 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.502 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:07.502 08:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV:
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:08.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:23:08.443 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:08.443 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:09.015
00:23:09.015 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:09.015 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:09.015 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:09.277 {
00:23:09.277 "cntlid": 95,
00:23:09.277 "qid": 0,
00:23:09.277 "state": "enabled",
00:23:09.277 "thread": "nvmf_tgt_poll_group_000",
00:23:09.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:09.277 "listen_address": {
00:23:09.277 "trtype": "TCP",
00:23:09.277 "adrfam": "IPv4",
00:23:09.277 "traddr": "10.0.0.2",
00:23:09.277 "trsvcid": "4420"
00:23:09.277 },
00:23:09.277 "peer_address": {
00:23:09.277 "trtype": "TCP",
00:23:09.277 "adrfam": "IPv4",
00:23:09.277 "traddr": "10.0.0.1",
00:23:09.277 "trsvcid": "44292"
00:23:09.277 },
00:23:09.277 "auth": {
00:23:09.277 "state": "completed",
00:23:09.277 "digest": "sha384",
00:23:09.277 "dhgroup": "ffdhe8192"
00:23:09.277 }
00:23:09.277 }
00:23:09.277 ]'
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:09.277 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:09.539 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:23:09.539 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:10.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:10.112 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:10.380 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:10.380 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:10.642
00:23:10.642 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:10.642 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:10.642 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:10.902 {
00:23:10.902 "cntlid": 97,
00:23:10.902 "qid": 0,
00:23:10.902 "state": "enabled",
00:23:10.902 "thread": "nvmf_tgt_poll_group_000",
00:23:10.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:10.902 "listen_address": {
00:23:10.902 "trtype": "TCP",
00:23:10.902 "adrfam": "IPv4",
00:23:10.902 "traddr": "10.0.0.2",
00:23:10.902 "trsvcid": "4420"
00:23:10.902 },
00:23:10.902 "peer_address": {
00:23:10.902 "trtype": "TCP",
00:23:10.902 "adrfam": "IPv4",
00:23:10.902 "traddr": "10.0.0.1",
00:23:10.902 "trsvcid": "44318"
00:23:10.902 },
00:23:10.902 "auth": {
00:23:10.902 "state": "completed",
00:23:10.902 "digest": "sha512",
00:23:10.902 "dhgroup": "null"
00:23:10.902 }
00:23:10.902 }
00:23:10.902 ]'
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:10.902 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:10.903 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:11.164 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:23:11.164 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=:
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:12.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.113 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:12.114 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:12.376
00:23:12.376 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:12.376 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:12.376 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.637 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:12.637 {
00:23:12.638 "cntlid": 99,
00:23:12.638 "qid": 0,
00:23:12.638 "state": "enabled",
00:23:12.638 "thread": "nvmf_tgt_poll_group_000",
00:23:12.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:12.638 "listen_address": {
00:23:12.638 "trtype": "TCP",
00:23:12.638 "adrfam": "IPv4",
00:23:12.638 "traddr": "10.0.0.2",
00:23:12.638 "trsvcid": "4420"
00:23:12.638 },
00:23:12.638 "peer_address": {
00:23:12.638 "trtype": "TCP",
00:23:12.638 "adrfam": "IPv4",
00:23:12.638 "traddr": "10.0.0.1",
00:23:12.638 "trsvcid": "44358"
00:23:12.638 },
00:23:12.638 "auth": {
00:23:12.638 "state": "completed",
00:23:12.638 "digest": "sha512",
00:23:12.638 "dhgroup": "null"
00:23:12.638 }
00:23:12.638 }
00:23:12.638 ]'
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:12.638 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:12.898 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==:
00:23:12.898 08:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==:
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:13.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:13.841 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:14.102
00:23:14.102 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:14.102 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:14.102 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:14.363 {
00:23:14.363 "cntlid": 101,
00:23:14.363 "qid": 0,
00:23:14.363 "state": "enabled",
00:23:14.363 "thread": "nvmf_tgt_poll_group_000",
00:23:14.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:14.363 "listen_address": {
00:23:14.363 "trtype": "TCP",
00:23:14.363 "adrfam": "IPv4",
00:23:14.363 "traddr": "10.0.0.2",
00:23:14.363 "trsvcid": "4420"
00:23:14.363 },
00:23:14.363 "peer_address": {
00:23:14.363 "trtype": "TCP",
00:23:14.363 "adrfam": "IPv4",
00:23:14.363 "traddr": "10.0.0.1",
00:23:14.363 "trsvcid": "44370"
00:23:14.363 },
00:23:14.363 "auth": {
00:23:14.363 "state": "completed",
00:23:14.363 "digest": "sha512",
00:23:14.363 "dhgroup": "null"
00:23:14.363 }
00:23:14.363 }
00:23:14.363 ]'
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:14.363 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:23:14.363 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:14.363 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:14.363 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:14.363 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:14.642 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV:
00:23:14.642 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV:
00:23:15.590 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:15.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:15.590 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:15.590 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.590 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:15.590 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:15.851
00:23:15.851 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:15.851 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:15.851 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:16.111 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:16.111 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:16.111 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.111 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:16.111 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:16.112 {
00:23:16.112 "cntlid": 103,
00:23:16.112 "qid": 0,
00:23:16.112 "state": "enabled",
00:23:16.112 "thread": "nvmf_tgt_poll_group_000",
00:23:16.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:16.112 "listen_address": {
00:23:16.112 "trtype": "TCP",
00:23:16.112 "adrfam": "IPv4",
00:23:16.112 "traddr": "10.0.0.2",
00:23:16.112 "trsvcid": "4420"
00:23:16.112 },
00:23:16.112 "peer_address": {
00:23:16.112 "trtype": "TCP",
00:23:16.112 "adrfam": "IPv4",
00:23:16.112 "traddr": "10.0.0.1",
00:23:16.112 "trsvcid": "44408"
00:23:16.112 },
00:23:16.112 "auth": {
00:23:16.112 "state": "completed",
00:23:16.112 "digest": "sha512",
00:23:16.112 "dhgroup": "null"
00:23:16.112 }
00:23:16.112 }
00:23:16.112 ]'
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:16.112 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:16.373 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:23:16.373 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:23:16.944 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:17.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:17.205 08:20:21
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.205 08:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.466 00:23:17.467 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.467 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.467 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.728 { 00:23:17.728 "cntlid": 105, 00:23:17.728 "qid": 0, 00:23:17.728 "state": "enabled", 00:23:17.728 "thread": "nvmf_tgt_poll_group_000", 00:23:17.728 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:17.728 "listen_address": { 00:23:17.728 "trtype": "TCP", 00:23:17.728 "adrfam": "IPv4", 00:23:17.728 "traddr": "10.0.0.2", 00:23:17.728 "trsvcid": "4420" 00:23:17.728 }, 00:23:17.728 "peer_address": { 00:23:17.728 "trtype": "TCP", 00:23:17.728 "adrfam": "IPv4", 00:23:17.728 "traddr": "10.0.0.1", 00:23:17.728 "trsvcid": "60352" 00:23:17.728 }, 00:23:17.728 "auth": { 00:23:17.728 "state": "completed", 00:23:17.728 "digest": "sha512", 00:23:17.728 "dhgroup": "ffdhe2048" 00:23:17.728 } 00:23:17.728 } 00:23:17.728 ]' 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.728 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.988 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:17.988 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:18.929 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:19.189 08:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.189 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.189 00:23:19.450 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.450 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.450 08:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.450 { 00:23:19.450 "cntlid": 107, 00:23:19.450 "qid": 0, 00:23:19.450 "state": "enabled", 00:23:19.450 "thread": "nvmf_tgt_poll_group_000", 00:23:19.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:19.450 "listen_address": { 00:23:19.450 "trtype": "TCP", 00:23:19.450 "adrfam": "IPv4", 00:23:19.450 "traddr": "10.0.0.2", 00:23:19.450 "trsvcid": "4420" 00:23:19.450 }, 00:23:19.450 "peer_address": { 00:23:19.450 "trtype": "TCP", 00:23:19.450 "adrfam": "IPv4", 00:23:19.450 "traddr": "10.0.0.1", 00:23:19.450 "trsvcid": "60380" 00:23:19.450 }, 00:23:19.450 "auth": { 00:23:19.450 "state": 
"completed", 00:23:19.450 "digest": "sha512", 00:23:19.450 "dhgroup": "ffdhe2048" 00:23:19.450 } 00:23:19.450 } 00:23:19.450 ]' 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.450 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:19.711 08:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:20.651 08:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.651 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.912 00:23:20.912 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.912 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.912 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.172 
08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.172 { 00:23:21.172 "cntlid": 109, 00:23:21.172 "qid": 0, 00:23:21.172 "state": "enabled", 00:23:21.172 "thread": "nvmf_tgt_poll_group_000", 00:23:21.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:21.172 "listen_address": { 00:23:21.172 "trtype": "TCP", 00:23:21.172 "adrfam": "IPv4", 00:23:21.172 "traddr": "10.0.0.2", 00:23:21.172 "trsvcid": "4420" 00:23:21.172 }, 00:23:21.172 "peer_address": { 00:23:21.172 "trtype": "TCP", 00:23:21.172 "adrfam": "IPv4", 00:23:21.172 "traddr": "10.0.0.1", 00:23:21.172 "trsvcid": "60398" 00:23:21.172 }, 00:23:21.172 "auth": { 00:23:21.172 "state": "completed", 00:23:21.172 "digest": "sha512", 00:23:21.172 "dhgroup": "ffdhe2048" 00:23:21.172 } 00:23:21.172 } 00:23:21.172 ]' 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:21.172 08:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.172 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.173 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.433 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:21.433 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.388 
08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:22.388 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:22.388 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.389 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.389 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.389 08:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.389 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.389 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.650 00:23:22.650 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.650 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.650 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.910 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.910 { 00:23:22.910 "cntlid": 111, 
00:23:22.910 "qid": 0, 00:23:22.910 "state": "enabled", 00:23:22.910 "thread": "nvmf_tgt_poll_group_000", 00:23:22.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:22.911 "listen_address": { 00:23:22.911 "trtype": "TCP", 00:23:22.911 "adrfam": "IPv4", 00:23:22.911 "traddr": "10.0.0.2", 00:23:22.911 "trsvcid": "4420" 00:23:22.911 }, 00:23:22.911 "peer_address": { 00:23:22.911 "trtype": "TCP", 00:23:22.911 "adrfam": "IPv4", 00:23:22.911 "traddr": "10.0.0.1", 00:23:22.911 "trsvcid": "60424" 00:23:22.911 }, 00:23:22.911 "auth": { 00:23:22.911 "state": "completed", 00:23:22.911 "digest": "sha512", 00:23:22.911 "dhgroup": "ffdhe2048" 00:23:22.911 } 00:23:22.911 } 00:23:22.911 ]' 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.911 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.172 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:23.172 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.115 08:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:24.115 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.116 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.376 00:23:24.376 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.376 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.376 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.636 { 00:23:24.636 "cntlid": 113, 00:23:24.636 "qid": 0, 00:23:24.636 "state": "enabled", 00:23:24.636 "thread": "nvmf_tgt_poll_group_000", 00:23:24.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:24.636 "listen_address": { 00:23:24.636 "trtype": "TCP", 00:23:24.636 "adrfam": "IPv4", 00:23:24.636 "traddr": "10.0.0.2", 00:23:24.636 "trsvcid": "4420" 00:23:24.636 }, 00:23:24.636 "peer_address": { 00:23:24.636 "trtype": "TCP", 00:23:24.636 "adrfam": "IPv4", 00:23:24.636 "traddr": "10.0.0.1", 00:23:24.636 "trsvcid": "60462" 00:23:24.636 }, 00:23:24.636 "auth": { 00:23:24.636 "state": 
"completed", 00:23:24.636 "digest": "sha512", 00:23:24.636 "dhgroup": "ffdhe3072" 00:23:24.636 } 00:23:24.636 } 00:23:24.636 ]' 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.636 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.895 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:24.895 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.838 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.100 00:23:26.100 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.100 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.100 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.360 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.360 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.360 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.361 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.361 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.361 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.361 { 00:23:26.361 "cntlid": 115, 00:23:26.361 "qid": 0, 00:23:26.361 "state": "enabled", 00:23:26.361 "thread": "nvmf_tgt_poll_group_000", 00:23:26.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:26.361 "listen_address": { 00:23:26.361 "trtype": "TCP", 00:23:26.361 "adrfam": "IPv4", 00:23:26.361 "traddr": "10.0.0.2", 00:23:26.361 "trsvcid": "4420" 00:23:26.361 }, 00:23:26.361 "peer_address": { 00:23:26.361 "trtype": "TCP", 00:23:26.361 "adrfam": "IPv4", 00:23:26.361 "traddr": "10.0.0.1", 00:23:26.361 "trsvcid": "60488" 00:23:26.361 }, 00:23:26.361 "auth": { 00:23:26.361 "state": "completed", 00:23:26.361 "digest": "sha512", 00:23:26.361 "dhgroup": "ffdhe3072" 00:23:26.361 } 00:23:26.361 } 00:23:26.361 ]' 00:23:26.361 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.361 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.361 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.361 08:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:26.361 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.620 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.620 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.621 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.621 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:26.621 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.564 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.825 00:23:27.825 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.825 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.825 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.086 08:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.086 { 00:23:28.086 "cntlid": 117, 00:23:28.086 "qid": 0, 00:23:28.086 "state": "enabled", 00:23:28.086 "thread": "nvmf_tgt_poll_group_000", 00:23:28.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:28.086 "listen_address": { 00:23:28.086 "trtype": "TCP", 00:23:28.086 "adrfam": "IPv4", 00:23:28.086 "traddr": "10.0.0.2", 00:23:28.086 "trsvcid": "4420" 00:23:28.086 }, 00:23:28.086 "peer_address": { 00:23:28.086 "trtype": "TCP", 00:23:28.086 "adrfam": "IPv4", 00:23:28.086 "traddr": "10.0.0.1", 00:23:28.086 "trsvcid": "34174" 00:23:28.086 }, 00:23:28.086 "auth": { 00:23:28.086 "state": "completed", 00:23:28.086 "digest": "sha512", 00:23:28.086 "dhgroup": "ffdhe3072" 00:23:28.086 } 00:23:28.086 } 00:23:28.086 ]' 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:28.086 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.346 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.346 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.346 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.346 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:28.347 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.288 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.550 00:23:29.550 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.550 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.550 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.810 { 00:23:29.810 "cntlid": 119, 00:23:29.810 "qid": 0, 00:23:29.810 "state": "enabled", 00:23:29.810 "thread": "nvmf_tgt_poll_group_000", 00:23:29.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:29.810 "listen_address": { 00:23:29.810 "trtype": "TCP", 00:23:29.810 "adrfam": "IPv4", 00:23:29.810 "traddr": "10.0.0.2", 00:23:29.810 "trsvcid": "4420" 00:23:29.810 }, 00:23:29.810 "peer_address": { 00:23:29.810 "trtype": "TCP", 00:23:29.810 "adrfam": "IPv4", 00:23:29.810 "traddr": "10.0.0.1", 
00:23:29.810 "trsvcid": "34192" 00:23:29.810 }, 00:23:29.810 "auth": { 00:23:29.810 "state": "completed", 00:23:29.810 "digest": "sha512", 00:23:29.810 "dhgroup": "ffdhe3072" 00:23:29.810 } 00:23:29.810 } 00:23:29.810 ]' 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:29.810 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.071 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.071 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.071 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.071 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:30.071 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:31.012 08:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.012 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.273 00:23:31.273 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.273 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.273 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.533 { 00:23:31.533 "cntlid": 121, 00:23:31.533 "qid": 0, 00:23:31.533 "state": "enabled", 00:23:31.533 "thread": "nvmf_tgt_poll_group_000", 00:23:31.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:31.533 "listen_address": { 00:23:31.533 "trtype": "TCP", 00:23:31.533 "adrfam": "IPv4", 00:23:31.533 "traddr": "10.0.0.2", 00:23:31.533 "trsvcid": "4420" 00:23:31.533 }, 00:23:31.533 "peer_address": { 00:23:31.533 "trtype": "TCP", 00:23:31.533 "adrfam": "IPv4", 00:23:31.533 "traddr": "10.0.0.1", 00:23:31.533 "trsvcid": "34214" 00:23:31.533 }, 00:23:31.533 "auth": { 00:23:31.533 "state": "completed", 00:23:31.533 "digest": "sha512", 00:23:31.533 "dhgroup": "ffdhe4096" 00:23:31.533 } 00:23:31.533 } 00:23:31.533 ]' 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.533 08:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:31.533 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.794 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.794 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.794 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.794 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:31.794 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:32.776 08:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.776 08:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.776 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.056 00:23:33.056 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.056 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.056 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.320 { 00:23:33.320 "cntlid": 123, 00:23:33.320 "qid": 0, 00:23:33.320 "state": "enabled", 00:23:33.320 "thread": "nvmf_tgt_poll_group_000", 00:23:33.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:33.320 "listen_address": { 00:23:33.320 "trtype": "TCP", 00:23:33.320 "adrfam": "IPv4", 00:23:33.320 "traddr": "10.0.0.2", 00:23:33.320 "trsvcid": "4420" 00:23:33.320 }, 00:23:33.320 "peer_address": { 00:23:33.320 "trtype": "TCP", 00:23:33.320 "adrfam": "IPv4", 00:23:33.320 "traddr": "10.0.0.1", 00:23:33.320 "trsvcid": "34246" 00:23:33.320 }, 00:23:33.320 "auth": { 00:23:33.320 "state": "completed", 00:23:33.320 "digest": "sha512", 00:23:33.320 "dhgroup": "ffdhe4096" 00:23:33.320 } 00:23:33.320 } 00:23:33.320 ]' 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:33.320 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.320 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:33.320 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.580 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.580 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.581 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.581 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:33.581 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:34.522 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.522 08:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.522 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.523 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.523 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.523 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.523 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.523 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.782 00:23:34.782 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.782 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.782 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:35.042 { 00:23:35.042 "cntlid": 125, 00:23:35.042 "qid": 0, 00:23:35.042 "state": "enabled", 00:23:35.042 "thread": "nvmf_tgt_poll_group_000", 00:23:35.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:35.042 "listen_address": { 00:23:35.042 "trtype": "TCP", 00:23:35.042 "adrfam": "IPv4", 00:23:35.042 "traddr": "10.0.0.2", 00:23:35.042 
"trsvcid": "4420" 00:23:35.042 }, 00:23:35.042 "peer_address": { 00:23:35.042 "trtype": "TCP", 00:23:35.042 "adrfam": "IPv4", 00:23:35.042 "traddr": "10.0.0.1", 00:23:35.042 "trsvcid": "34264" 00:23:35.042 }, 00:23:35.042 "auth": { 00:23:35.042 "state": "completed", 00:23:35.042 "digest": "sha512", 00:23:35.042 "dhgroup": "ffdhe4096" 00:23:35.042 } 00:23:35.042 } 00:23:35.042 ]' 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:35.042 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:35.303 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.303 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.303 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.303 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:35.303 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.245 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.505 00:23:36.505 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.505 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.505 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.765 { 00:23:36.765 "cntlid": 127, 00:23:36.765 "qid": 0, 00:23:36.765 "state": "enabled", 00:23:36.765 "thread": "nvmf_tgt_poll_group_000", 00:23:36.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:36.765 "listen_address": { 00:23:36.765 "trtype": "TCP", 00:23:36.765 "adrfam": "IPv4", 00:23:36.765 "traddr": "10.0.0.2", 00:23:36.765 "trsvcid": "4420" 00:23:36.765 }, 00:23:36.765 "peer_address": { 00:23:36.765 "trtype": "TCP", 00:23:36.765 "adrfam": "IPv4", 00:23:36.765 "traddr": "10.0.0.1", 00:23:36.765 "trsvcid": "34300" 00:23:36.765 }, 00:23:36.765 "auth": { 00:23:36.765 "state": "completed", 00:23:36.765 "digest": "sha512", 00:23:36.765 "dhgroup": "ffdhe4096" 00:23:36.765 } 00:23:36.765 } 00:23:36.765 ]' 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.765 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.026 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:37.026 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.966 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.537 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.537 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.537 { 00:23:38.537 "cntlid": 129, 00:23:38.537 "qid": 0, 00:23:38.537 "state": "enabled", 00:23:38.537 "thread": "nvmf_tgt_poll_group_000", 00:23:38.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:38.537 "listen_address": { 00:23:38.537 "trtype": "TCP", 00:23:38.537 "adrfam": "IPv4", 00:23:38.537 "traddr": "10.0.0.2", 00:23:38.537 "trsvcid": "4420" 00:23:38.537 }, 00:23:38.537 "peer_address": { 00:23:38.537 "trtype": "TCP", 00:23:38.537 "adrfam": "IPv4", 00:23:38.537 "traddr": "10.0.0.1", 00:23:38.537 "trsvcid": "55942" 00:23:38.537 }, 00:23:38.537 "auth": { 00:23:38.537 "state": "completed", 00:23:38.537 "digest": "sha512", 00:23:38.537 "dhgroup": "ffdhe6144" 00:23:38.537 } 00:23:38.537 } 00:23:38.537 ]' 00:23:38.537 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.797 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.798 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.058 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:39.058 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:39.628 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.629 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:39.629 08:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.889 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.150 00:23:40.411 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.411 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.411 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.411 { 00:23:40.411 "cntlid": 131, 00:23:40.411 "qid": 0, 00:23:40.411 "state": "enabled", 00:23:40.411 "thread": "nvmf_tgt_poll_group_000", 00:23:40.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:40.411 "listen_address": { 00:23:40.411 "trtype": "TCP", 00:23:40.411 "adrfam": "IPv4", 00:23:40.411 "traddr": "10.0.0.2", 00:23:40.411 
"trsvcid": "4420" 00:23:40.411 }, 00:23:40.411 "peer_address": { 00:23:40.411 "trtype": "TCP", 00:23:40.411 "adrfam": "IPv4", 00:23:40.411 "traddr": "10.0.0.1", 00:23:40.411 "trsvcid": "55970" 00:23:40.411 }, 00:23:40.411 "auth": { 00:23:40.411 "state": "completed", 00:23:40.411 "digest": "sha512", 00:23:40.411 "dhgroup": "ffdhe6144" 00:23:40.411 } 00:23:40.411 } 00:23:40.411 ]' 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.411 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.672 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:40.672 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.672 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.672 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.672 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.932 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:40.932 08:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.502 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.503 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.763 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.023 00:23:42.023 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.023 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:23:42.023 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.284 { 00:23:42.284 "cntlid": 133, 00:23:42.284 "qid": 0, 00:23:42.284 "state": "enabled", 00:23:42.284 "thread": "nvmf_tgt_poll_group_000", 00:23:42.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:42.284 "listen_address": { 00:23:42.284 "trtype": "TCP", 00:23:42.284 "adrfam": "IPv4", 00:23:42.284 "traddr": "10.0.0.2", 00:23:42.284 "trsvcid": "4420" 00:23:42.284 }, 00:23:42.284 "peer_address": { 00:23:42.284 "trtype": "TCP", 00:23:42.284 "adrfam": "IPv4", 00:23:42.284 "traddr": "10.0.0.1", 00:23:42.284 "trsvcid": "55994" 00:23:42.284 }, 00:23:42.284 "auth": { 00:23:42.284 "state": "completed", 00:23:42.284 "digest": "sha512", 00:23:42.284 "dhgroup": "ffdhe6144" 00:23:42.284 } 00:23:42.284 } 00:23:42.284 ]' 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:42.284 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.284 08:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.284 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:42.284 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.544 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.544 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.544 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.545 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:42.545 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.487 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.487 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.747 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:44.007 { 00:23:44.007 "cntlid": 135, 00:23:44.007 "qid": 0, 00:23:44.007 "state": "enabled", 00:23:44.007 "thread": "nvmf_tgt_poll_group_000", 00:23:44.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:44.007 "listen_address": { 00:23:44.007 "trtype": "TCP", 00:23:44.007 "adrfam": "IPv4", 00:23:44.007 "traddr": "10.0.0.2", 00:23:44.007 "trsvcid": "4420" 00:23:44.007 }, 00:23:44.007 "peer_address": { 00:23:44.007 "trtype": "TCP", 00:23:44.007 "adrfam": "IPv4", 00:23:44.007 "traddr": "10.0.0.1", 00:23:44.007 "trsvcid": "56022" 00:23:44.007 }, 00:23:44.007 "auth": { 00:23:44.007 "state": "completed", 00:23:44.007 "digest": "sha512", 00:23:44.007 "dhgroup": "ffdhe6144" 00:23:44.007 } 00:23:44.007 } 00:23:44.007 ]' 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.007 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.267 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:44.268 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.207 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.207 08:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.468 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.038 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.038 { 00:23:46.038 "cntlid": 137, 00:23:46.038 "qid": 0, 00:23:46.038 "state": "enabled", 00:23:46.038 "thread": "nvmf_tgt_poll_group_000", 00:23:46.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:46.038 "listen_address": { 00:23:46.038 "trtype": "TCP", 00:23:46.038 "adrfam": "IPv4", 00:23:46.038 "traddr": "10.0.0.2", 00:23:46.038 
"trsvcid": "4420" 00:23:46.038 }, 00:23:46.038 "peer_address": { 00:23:46.038 "trtype": "TCP", 00:23:46.038 "adrfam": "IPv4", 00:23:46.038 "traddr": "10.0.0.1", 00:23:46.038 "trsvcid": "56050" 00:23:46.038 }, 00:23:46.038 "auth": { 00:23:46.038 "state": "completed", 00:23:46.038 "digest": "sha512", 00:23:46.038 "dhgroup": "ffdhe8192" 00:23:46.038 } 00:23:46.038 } 00:23:46.038 ]' 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.038 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:46.299 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.239 08:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.239 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.809 00:23:47.809 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.809 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.809 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:48.070 { 00:23:48.070 "cntlid": 139, 00:23:48.070 "qid": 0, 00:23:48.070 "state": "enabled", 00:23:48.070 "thread": "nvmf_tgt_poll_group_000", 00:23:48.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:48.070 "listen_address": { 00:23:48.070 "trtype": "TCP", 00:23:48.070 "adrfam": "IPv4", 00:23:48.070 "traddr": "10.0.0.2", 00:23:48.070 "trsvcid": "4420" 00:23:48.070 }, 00:23:48.070 "peer_address": { 00:23:48.070 "trtype": "TCP", 00:23:48.070 "adrfam": "IPv4", 00:23:48.070 "traddr": "10.0.0.1", 00:23:48.070 "trsvcid": "37688" 00:23:48.070 }, 00:23:48.070 "auth": { 00:23:48.070 "state": "completed", 00:23:48.070 "digest": "sha512", 00:23:48.070 "dhgroup": "ffdhe8192" 00:23:48.070 } 00:23:48.070 } 00:23:48.070 ]' 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:48.070 08:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.070 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.330 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:48.330 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: --dhchap-ctrl-secret DHHC-1:02:ZTkyNjJlYTZlNTM1NTYyZDI1MzBmZDFhYWU4YzZjZjFjZTIxY2NkYTY4ZmJmYTUxtPN2Mw==: 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.272 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.843 00:23:49.843 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:49.843 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:49.843 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.103 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.104 08:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:50.104 { 00:23:50.104 "cntlid": 141, 00:23:50.104 "qid": 0, 00:23:50.104 "state": "enabled", 00:23:50.104 "thread": "nvmf_tgt_poll_group_000", 00:23:50.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:50.104 "listen_address": { 00:23:50.104 "trtype": "TCP", 00:23:50.104 "adrfam": "IPv4", 00:23:50.104 "traddr": "10.0.0.2", 00:23:50.104 "trsvcid": "4420" 00:23:50.104 }, 00:23:50.104 "peer_address": { 00:23:50.104 "trtype": "TCP", 00:23:50.104 "adrfam": "IPv4", 00:23:50.104 "traddr": "10.0.0.1", 00:23:50.104 "trsvcid": "37708" 00:23:50.104 }, 00:23:50.104 "auth": { 00:23:50.104 "state": "completed", 00:23:50.104 "digest": "sha512", 00:23:50.104 "dhgroup": "ffdhe8192" 00:23:50.104 } 00:23:50.104 } 00:23:50.104 ]' 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.104 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.364 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:50.364 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:01:M2M1OTUxMjcwZmRlMTJlNGY0ZTAwMTcxMDhmYTRlZWYTjIMV: 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:51.306 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:51.877 00:23:51.877 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:51.877 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:51.877 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.137 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.137 { 00:23:52.137 "cntlid": 143, 00:23:52.137 "qid": 0, 00:23:52.137 "state": "enabled", 00:23:52.138 "thread": "nvmf_tgt_poll_group_000", 00:23:52.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:52.138 "listen_address": { 00:23:52.138 "trtype": "TCP", 00:23:52.138 "adrfam": 
"IPv4", 00:23:52.138 "traddr": "10.0.0.2", 00:23:52.138 "trsvcid": "4420" 00:23:52.138 }, 00:23:52.138 "peer_address": { 00:23:52.138 "trtype": "TCP", 00:23:52.138 "adrfam": "IPv4", 00:23:52.138 "traddr": "10.0.0.1", 00:23:52.138 "trsvcid": "37728" 00:23:52.138 }, 00:23:52.138 "auth": { 00:23:52.138 "state": "completed", 00:23:52.138 "digest": "sha512", 00:23:52.138 "dhgroup": "ffdhe8192" 00:23:52.138 } 00:23:52.138 } 00:23:52.138 ]' 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.138 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.398 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:52.398 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:52.968 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:53.229 08:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.229 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.800 00:23:53.800 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.800 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.800 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.060 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.061 { 00:23:54.061 "cntlid": 145, 00:23:54.061 "qid": 0, 00:23:54.061 "state": "enabled", 00:23:54.061 "thread": "nvmf_tgt_poll_group_000", 00:23:54.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:54.061 "listen_address": { 00:23:54.061 "trtype": "TCP", 00:23:54.061 "adrfam": "IPv4", 00:23:54.061 "traddr": "10.0.0.2", 00:23:54.061 "trsvcid": "4420" 00:23:54.061 }, 00:23:54.061 "peer_address": { 00:23:54.061 "trtype": "TCP", 00:23:54.061 "adrfam": "IPv4", 00:23:54.061 "traddr": "10.0.0.1", 00:23:54.061 "trsvcid": "37754" 00:23:54.061 }, 00:23:54.061 "auth": { 00:23:54.061 "state": 
"completed", 00:23:54.061 "digest": "sha512", 00:23:54.061 "dhgroup": "ffdhe8192" 00:23:54.061 } 00:23:54.061 } 00:23:54.061 ]' 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.061 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.321 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:54.321 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGJlYWRmYjdjNjVlZGZiZmNmZDk0NjJjMjk3YjJhYzk2OWU3ZTk2Mzc2Y2FkZjFjr/0otQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDRiMDU4NmIxODBmYjg4YTE0NmU1ZjIxYmQ5ZTJmNjNjZmY0NmRjOWFhNDM2ZWJiYzdlOWFmZDA3NTgwYWQyMk2vZvM=: 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:55.262 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:55.525 request: 00:23:55.525 { 00:23:55.525 "name": "nvme0", 00:23:55.525 "trtype": "tcp", 00:23:55.525 "traddr": "10.0.0.2", 00:23:55.525 "adrfam": "ipv4", 00:23:55.525 "trsvcid": "4420", 00:23:55.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:55.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:55.525 "prchk_reftag": false, 00:23:55.525 "prchk_guard": false, 00:23:55.525 "hdgst": false, 00:23:55.525 "ddgst": false, 00:23:55.525 "dhchap_key": "key2", 00:23:55.525 "allow_unrecognized_csi": false, 00:23:55.525 "method": "bdev_nvme_attach_controller", 00:23:55.525 "req_id": 1 00:23:55.525 } 00:23:55.525 Got JSON-RPC error response 00:23:55.525 response: 00:23:55.525 { 00:23:55.525 "code": -5, 00:23:55.525 "message": 
"Input/output error" 00:23:55.525 } 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:55.785 08:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:55.785 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:56.046 request: 00:23:56.046 { 00:23:56.046 "name": "nvme0", 00:23:56.046 "trtype": "tcp", 00:23:56.046 "traddr": "10.0.0.2", 00:23:56.046 "adrfam": "ipv4", 00:23:56.046 "trsvcid": "4420", 00:23:56.046 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:56.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:56.046 "prchk_reftag": false, 00:23:56.046 "prchk_guard": false, 00:23:56.046 "hdgst": 
false, 00:23:56.046 "ddgst": false, 00:23:56.046 "dhchap_key": "key1", 00:23:56.046 "dhchap_ctrlr_key": "ckey2", 00:23:56.046 "allow_unrecognized_csi": false, 00:23:56.046 "method": "bdev_nvme_attach_controller", 00:23:56.046 "req_id": 1 00:23:56.046 } 00:23:56.046 Got JSON-RPC error response 00:23:56.046 response: 00:23:56.046 { 00:23:56.046 "code": -5, 00:23:56.046 "message": "Input/output error" 00:23:56.046 } 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.307 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.567 request: 00:23:56.567 { 00:23:56.567 "name": "nvme0", 00:23:56.567 "trtype": 
"tcp", 00:23:56.567 "traddr": "10.0.0.2", 00:23:56.567 "adrfam": "ipv4", 00:23:56.567 "trsvcid": "4420", 00:23:56.567 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:56.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:56.567 "prchk_reftag": false, 00:23:56.567 "prchk_guard": false, 00:23:56.567 "hdgst": false, 00:23:56.567 "ddgst": false, 00:23:56.567 "dhchap_key": "key1", 00:23:56.567 "dhchap_ctrlr_key": "ckey1", 00:23:56.567 "allow_unrecognized_csi": false, 00:23:56.567 "method": "bdev_nvme_attach_controller", 00:23:56.567 "req_id": 1 00:23:56.567 } 00:23:56.567 Got JSON-RPC error response 00:23:56.567 response: 00:23:56.567 { 00:23:56.567 "code": -5, 00:23:56.567 "message": "Input/output error" 00:23:56.567 } 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.567 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1976299 ']' 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976299' 00:23:56.827 killing process with pid 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1976299 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=2003481 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 2003481 00:23:56.827 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2003481 ']' 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.827 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2003481 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2003481 ']' 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.769 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.029 null0 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GLw 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.zif ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zif 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gHq 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.LBZ ]] 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LBZ 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.029 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fol 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Bhi ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bhi 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZUH 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.030 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.290 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.290 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:58.290 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:58.290 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:58.860 nvme0n1 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:59.122 { 00:23:59.122 "cntlid": 1, 00:23:59.122 "qid": 0, 00:23:59.122 "state": "enabled", 00:23:59.122 "thread": "nvmf_tgt_poll_group_000", 00:23:59.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:59.122 "listen_address": { 00:23:59.122 "trtype": "TCP", 00:23:59.122 "adrfam": "IPv4", 00:23:59.122 "traddr": "10.0.0.2", 00:23:59.122 "trsvcid": "4420" 00:23:59.122 }, 00:23:59.122 "peer_address": { 00:23:59.122 "trtype": "TCP", 00:23:59.122 "adrfam": "IPv4", 00:23:59.122 "traddr": 
"10.0.0.1", 00:23:59.122 "trsvcid": "55946" 00:23:59.122 }, 00:23:59.122 "auth": { 00:23:59.122 "state": "completed", 00:23:59.122 "digest": "sha512", 00:23:59.122 "dhgroup": "ffdhe8192" 00:23:59.122 } 00:23:59.122 } 00:23:59.122 ]' 00:23:59.122 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.383 08:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.383 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:23:59.383 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=: 00:24:00.324 08:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:00.324 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:00.585 08:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.585 request: 00:24:00.585 { 00:24:00.585 "name": "nvme0", 00:24:00.585 "trtype": "tcp", 00:24:00.585 "traddr": "10.0.0.2", 00:24:00.585 "adrfam": "ipv4", 00:24:00.585 "trsvcid": "4420", 00:24:00.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:00.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:00.585 "prchk_reftag": false, 00:24:00.585 "prchk_guard": false, 00:24:00.585 "hdgst": false, 00:24:00.585 "ddgst": false, 00:24:00.585 "dhchap_key": "key3", 00:24:00.585 
"allow_unrecognized_csi": false, 00:24:00.585 "method": "bdev_nvme_attach_controller", 00:24:00.585 "req_id": 1 00:24:00.585 } 00:24:00.585 Got JSON-RPC error response 00:24:00.585 response: 00:24:00.585 { 00:24:00.585 "code": -5, 00:24:00.585 "message": "Input/output error" 00:24:00.585 } 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.585 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.586 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:00.586 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:00.586 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:00.586 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:00.847 08:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.847 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:01.108 request: 00:24:01.108 { 00:24:01.108 "name": "nvme0", 00:24:01.108 "trtype": "tcp", 00:24:01.108 "traddr": "10.0.0.2", 00:24:01.108 "adrfam": "ipv4", 00:24:01.108 "trsvcid": "4420", 00:24:01.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:01.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:01.108 "prchk_reftag": false, 00:24:01.108 "prchk_guard": false, 00:24:01.108 "hdgst": false, 00:24:01.108 "ddgst": false, 00:24:01.108 "dhchap_key": "key3", 00:24:01.108 "allow_unrecognized_csi": false, 00:24:01.108 "method": "bdev_nvme_attach_controller", 00:24:01.108 "req_id": 1 00:24:01.108 } 00:24:01.108 Got JSON-RPC error response 00:24:01.108 response: 00:24:01.108 { 00:24:01.108 "code": -5, 00:24:01.108 "message": "Input/output error" 00:24:01.108 } 00:24:01.108 
08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:01.108 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:01.680 request: 00:24:01.680 { 00:24:01.680 "name": "nvme0", 00:24:01.680 "trtype": "tcp", 00:24:01.680 "traddr": "10.0.0.2", 00:24:01.680 "adrfam": "ipv4", 00:24:01.680 "trsvcid": "4420", 00:24:01.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:01.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:01.680 "prchk_reftag": false, 00:24:01.680 "prchk_guard": false, 00:24:01.680 "hdgst": false, 00:24:01.680 "ddgst": false, 00:24:01.680 "dhchap_key": "key0", 00:24:01.680 "dhchap_ctrlr_key": "key1", 00:24:01.680 "allow_unrecognized_csi": false, 00:24:01.680 "method": "bdev_nvme_attach_controller", 00:24:01.680 "req_id": 1 00:24:01.680 } 00:24:01.680 Got JSON-RPC error response 00:24:01.680 response: 00:24:01.680 { 00:24:01.680 "code": -5, 00:24:01.680 "message": "Input/output error" 00:24:01.680 } 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:01.680 nvme0n1 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:01.680 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.941 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.941 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.941 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:02.203 08:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:03.143 nvme0n1
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:24:03.143 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:03.404 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.404 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:24:03.404 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: --dhchap-ctrl-secret DHHC-1:03:ZTI0ZTJlNTMyMzU0MjEzMTkyZDIzODNjMGFkZTI0YWE5MzZmOTAzNzYyYmIzN2ZmYmU3MjU2YmI1ZjViOWJlYvJQ0eA=:
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:04.347 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:04.928 request:
00:24:04.928 {
00:24:04.929 "name": "nvme0",
00:24:04.929 "trtype": "tcp",
00:24:04.929 "traddr": "10.0.0.2",
00:24:04.929 "adrfam": "ipv4",
00:24:04.929 "trsvcid": "4420",
00:24:04.929 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:24:04.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:24:04.929 "prchk_reftag": false,
00:24:04.929 "prchk_guard": false,
00:24:04.929 "hdgst": false,
00:24:04.929 "ddgst": false,
00:24:04.929 "dhchap_key": "key1",
00:24:04.929 "allow_unrecognized_csi": false,
00:24:04.929 "method": "bdev_nvme_attach_controller",
00:24:04.929 "req_id": 1
00:24:04.929 }
00:24:04.929 Got JSON-RPC error response
00:24:04.929 response:
00:24:04.929 {
00:24:04.929 "code": -5,
00:24:04.929 "message": "Input/output error"
00:24:04.929 }
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:04.929 08:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:05.871 nvme0n1
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:05.871 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:24:06.133 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:24:06.133 nvme0n1
00:24:06.394 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:24:06.394 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:06.394 08:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:24:06.394 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:06.394 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:06.394 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: '' 2s
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc:
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc: ]]
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzIxNTBjMDc4ZGFlMmMxMjcwYmRiMzdmNzUzMDNhZjBGrzwc:
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:24:06.655 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:24:08.569 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: 2s
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==:
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==: ]]
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Zjc3MDk4ODkxY2NjNmU1MGE3NjdiNWZiYTgzYTU5YzNmYjYxYWQ5OWFjOTVhMTFha5uR3g==:
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:24:08.570 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:11.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:11.112 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:11.682 nvme0n1
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:11.682 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:24:12.253 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:24:12.514 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:24:12.514 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:24:12.514 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:12.774 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:13.344 request:
00:24:13.344 {
00:24:13.344 "name": "nvme0",
00:24:13.344 "dhchap_key": "key1",
00:24:13.344 "dhchap_ctrlr_key": "key3",
00:24:13.344 "method": "bdev_nvme_set_keys",
00:24:13.344 "req_id": 1
00:24:13.344 }
00:24:13.344 Got JSON-RPC error response
00:24:13.344 response:
00:24:13.344 {
00:24:13.344 "code": -13,
00:24:13.344 "message": "Permission denied"
00:24:13.344 }
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:24:13.344 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:24:14.439 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:14.439 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:14.439 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:14.700 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:15.644 nvme0n1
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:24:15.644 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:15.645 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:15.906 request:
00:24:15.906 {
00:24:15.906 "name": "nvme0",
00:24:15.906 "dhchap_key": "key2",
00:24:15.906 "dhchap_ctrlr_key": "key0",
00:24:15.906 "method": "bdev_nvme_set_keys",
00:24:15.906 "req_id": 1
00:24:15.906 }
00:24:15.906 Got JSON-RPC error response
00:24:15.906 response:
00:24:15.906 {
00:24:15.906 "code": -13,
00:24:15.906 "message": "Permission denied"
00:24:15.906 }
00:24:15.906 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:24:15.906 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:15.906 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:15.907 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:15.907 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:15.907 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:24:15.907 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:16.167 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:24:16.167 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:24:17.110 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:17.110 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:24:17.110 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1976521
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1976521 ']'
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1976521
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976521
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976521'
00:24:17.371 killing process with pid 1976521
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1976521
00:24:17.371 08:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1976521
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20}
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:24:17.632 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 2003481 ']'
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 2003481
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2003481 ']'
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2003481
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2003481
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2003481'
00:24:17.632 killing process with pid 2003481
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2003481
00:24:17.632 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2003481
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:24:17.893 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:24:19.811 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:24:19.812 08:21:24
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GLw /tmp/spdk.key-sha256.gHq /tmp/spdk.key-sha384.Fol /tmp/spdk.key-sha512.ZUH /tmp/spdk.key-sha512.zif /tmp/spdk.key-sha384.LBZ /tmp/spdk.key-sha256.Bhi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:19.812 00:24:19.812 real 2m45.978s 00:24:19.812 user 6m7.867s 00:24:19.812 sys 0m25.371s 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.812 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.812 ************************************ 00:24:19.812 END TEST nvmf_auth_target 00:24:19.812 ************************************ 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:20.073 ************************************ 00:24:20.073 START TEST nvmf_bdevio_no_huge 00:24:20.073 ************************************ 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:20.073 * Looking for test storage... 00:24:20.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.073 
08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.073 --rc genhtml_branch_coverage=1 00:24:20.073 --rc genhtml_function_coverage=1 00:24:20.073 --rc genhtml_legend=1 00:24:20.073 --rc 
geninfo_all_blocks=1 00:24:20.073 --rc geninfo_unexecuted_blocks=1 00:24:20.073 00:24:20.073 ' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.073 --rc genhtml_branch_coverage=1 00:24:20.073 --rc genhtml_function_coverage=1 00:24:20.073 --rc genhtml_legend=1 00:24:20.073 --rc geninfo_all_blocks=1 00:24:20.073 --rc geninfo_unexecuted_blocks=1 00:24:20.073 00:24:20.073 ' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.073 --rc genhtml_branch_coverage=1 00:24:20.073 --rc genhtml_function_coverage=1 00:24:20.073 --rc genhtml_legend=1 00:24:20.073 --rc geninfo_all_blocks=1 00:24:20.073 --rc geninfo_unexecuted_blocks=1 00:24:20.073 00:24:20.073 ' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.073 --rc genhtml_branch_coverage=1 00:24:20.073 --rc genhtml_function_coverage=1 00:24:20.073 --rc genhtml_legend=1 00:24:20.073 --rc geninfo_all_blocks=1 00:24:20.073 --rc geninfo_unexecuted_blocks=1 00:24:20.073 00:24:20.073 ' 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.073 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:20.336 08:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:20.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:20.336 08:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:24:20.336 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:28.482 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:28.482 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.483 08:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:28.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:28.483 08:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:28.483 Found net devices under 0000:31:00.0: cvl_0_0 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:28.483 Found net devices under 0000:31:00.1: cvl_0_1 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@247 -- # create_target_ns 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:28.483 08:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:28.483 08:21:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:28.483 10.0.0.1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:28.483 10.0.0.2 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:28.483 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:28.484 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:28.745 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:28.746 
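The `val_to_ip` calls traced above (`val_to_ip 167772161` printing `10.0.0.1`, `val_to_ip 167772162` printing `10.0.0.2`) convert the integer `ip_pool` counter into dotted-quad notation. A minimal sketch of such a helper, reconstructed from the trace (the real `nvmf/setup.sh` implementation may differ in detail):

```shell
# Sketch: convert a 32-bit integer (e.g. 167772161 == 0x0A000001) into a
# dotted-quad IPv4 address by extracting each byte with shifts and masks.
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$(( (val >> 24) & 0xff )) \
		$(( (val >> 16) & 0xff )) \
		$(( (val >> 8)  & 0xff )) \
		$((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets the harness hand out consecutive initiator/target address pairs with plain arithmetic (`ip_pool += 2` per device pair, as seen in the `(( _dev++, ip_pool += 2 ))` step).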
08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.746 
08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:28.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.642 ms 00:24:28.746 00:24:28.746 --- 10.0.0.1 ping statistics --- 00:24:28.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.746 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:28.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:24:28.746 00:24:28.746 --- 10.0.0.2 ping statistics --- 00:24:28.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.746 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:28.746 
08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.746 
08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.746 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:28.747 08:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=2012829 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 2012829 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2012829 ']' 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
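Throughout the trace, helpers like `set_up`, `set_ip`, and `ping_ip` take an optional `in_ns` argument naming the `NVMF_TARGET_NS_CMD` array, then use a bash nameref plus `eval` so the same command runs either directly or prefixed with `ip netns exec nvmf_ns_spdk`. A self-contained sketch of that dispatch pattern (the prefix array here is a stand-in, not the real netns command, so it can run without root):

```shell
# Sketch of the in_ns dispatch pattern from nvmf/setup.sh: if a namespace
# command array name is given, a nameref (local -n) resolves it and its
# elements are prepended to the command before eval; otherwise the command
# runs as-is in the default namespace.
NS_CMD=(echo "netns-prefix:")   # stand-in for (ip netns exec nvmf_ns_spdk)

run_in() {
	local in_ns=$1; shift
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns   # nameref to the caller's array
		eval "${ns[*]} $*"
	else
		eval "$*"
	fi
}

run_in NS_CMD echo hello   # prefixed invocation
run_in ""     echo plain   # direct invocation
```

Passing the array *name* rather than its contents is what lets a single helper serve both the host side (`set_ip cvl_0_0 167772161`) and the namespaced target side (`set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD`).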
00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.747 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.008 [2024-11-20 08:21:33.528604] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:24:29.008 [2024-11-20 08:21:33.528679] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:29.008 [2024-11-20 08:21:33.641950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.009 [2024-11-20 08:21:33.702225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.009 [2024-11-20 08:21:33.702266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.009 [2024-11-20 08:21:33.702274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.009 [2024-11-20 08:21:33.702282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.009 [2024-11-20 08:21:33.702288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:29.009 [2024-11-20 08:21:33.703626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.009 [2024-11-20 08:21:33.703787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:29.009 [2024-11-20 08:21:33.703935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:29.009 [2024-11-20 08:21:33.703949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 [2024-11-20 08:21:34.383563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.954 08:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 Malloc0 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:29.954 [2024-11-20 08:21:34.437382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.954 08:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:29.954 { 00:24:29.954 "params": { 00:24:29.954 "name": "Nvme$subsystem", 00:24:29.954 "trtype": "$TEST_TRANSPORT", 00:24:29.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.954 "adrfam": "ipv4", 00:24:29.954 "trsvcid": "$NVMF_PORT", 00:24:29.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.954 "hdgst": ${hdgst:-false}, 00:24:29.954 "ddgst": ${ddgst:-false} 00:24:29.954 }, 00:24:29.954 "method": "bdev_nvme_attach_controller" 00:24:29.954 } 00:24:29.954 EOF 00:24:29.954 )") 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 
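The `gen_nvmf_target_json` expansion above collects one heredoc-generated JSON fragment per subsystem into a `config` array, then joins the fragments with commas (`IFS=,`) and pipes them through `jq .`. A reduced sketch of that collect-and-join pattern (field set trimmed for brevity; the real template carries the full `bdev_nvme_attach_controller` parameters shown in the log):

```shell
# Sketch: build per-subsystem JSON fragments with command-substituted
# heredocs, then join the array elements with commas for jq.
config=()
for subsystem in 1; do
	config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "trsvcid": "4420" }
EOF
)")
done

# Join with commas in a subshell so the caller's IFS is untouched.
(IFS=,; printf '%s\n' "${config[*]}")
```

Quoting the command substitution keeps each fragment, newlines and all, as a single array element, which is why the joined output is still valid JSON input for `jq`.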
00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:24:29.954 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:29.954 "params": { 00:24:29.954 "name": "Nvme1", 00:24:29.955 "trtype": "tcp", 00:24:29.955 "traddr": "10.0.0.2", 00:24:29.955 "adrfam": "ipv4", 00:24:29.955 "trsvcid": "4420", 00:24:29.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.955 "hdgst": false, 00:24:29.955 "ddgst": false 00:24:29.955 }, 00:24:29.955 "method": "bdev_nvme_attach_controller" 00:24:29.955 }' 00:24:29.955 [2024-11-20 08:21:34.496486] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:24:29.955 [2024-11-20 08:21:34.496571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2013169 ] 00:24:29.955 [2024-11-20 08:21:34.590920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.955 [2024-11-20 08:21:34.645949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.955 [2024-11-20 08:21:34.646304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.955 [2024-11-20 08:21:34.646310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.216 I/O targets: 00:24:30.216 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:30.216 00:24:30.216 00:24:30.216 CUnit - A unit testing framework for C - Version 2.1-3 00:24:30.216 http://cunit.sourceforge.net/ 00:24:30.216 00:24:30.216 00:24:30.216 Suite: bdevio tests on: Nvme1n1 00:24:30.478 Test: blockdev write read block ...passed 00:24:30.478 Test: blockdev write zeroes read block ...passed 00:24:30.478 Test: blockdev write zeroes read no split ...passed 00:24:30.478 Test: blockdev write zeroes 
read split ...passed 00:24:30.478 Test: blockdev write zeroes read split partial ...passed 00:24:30.478 Test: blockdev reset ...[2024-11-20 08:21:35.067241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:30.478 [2024-11-20 08:21:35.067313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717fb0 (9): Bad file descriptor 00:24:30.478 [2024-11-20 08:21:35.136813] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:24:30.478 passed 00:24:30.478 Test: blockdev write read 8 blocks ...passed 00:24:30.740 Test: blockdev write read size > 128k ...passed 00:24:30.740 Test: blockdev write read invalid size ...passed 00:24:30.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:30.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:30.740 Test: blockdev write read max offset ...passed 00:24:30.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:30.740 Test: blockdev writev readv 8 blocks ...passed 00:24:30.740 Test: blockdev writev readv 30 x 1block ...passed 00:24:30.740 Test: blockdev writev readv block ...passed 00:24:30.740 Test: blockdev writev readv size > 128k ...passed 00:24:30.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:30.740 Test: blockdev comparev and writev ...[2024-11-20 08:21:35.444445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.740 [2024-11-20 08:21:35.444470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.740 [2024-11-20 08:21:35.444482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.740 [2024-11-20 
08:21:35.444488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.740 [2024-11-20 08:21:35.444945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.740 [2024-11-20 08:21:35.444953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.740 [2024-11-20 08:21:35.444963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.740 [2024-11-20 08:21:35.444969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.740 [2024-11-20 08:21:35.445431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.740 [2024-11-20 08:21:35.445439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.741 [2024-11-20 08:21:35.445448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.741 [2024-11-20 08:21:35.445454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.741 [2024-11-20 08:21:35.445878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.741 [2024-11-20 08:21:35.445887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.741 [2024-11-20 08:21:35.445897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:24:30.741 [2024-11-20 08:21:35.445902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.001 passed 00:24:31.001 Test: blockdev nvme passthru rw ...passed 00:24:31.001 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:21:35.530724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:31.001 [2024-11-20 08:21:35.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.001 [2024-11-20 08:21:35.531081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:31.001 [2024-11-20 08:21:35.531089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.001 [2024-11-20 08:21:35.531435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:31.001 [2024-11-20 08:21:35.531444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.001 [2024-11-20 08:21:35.531787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:31.001 [2024-11-20 08:21:35.531795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.001 passed 00:24:31.001 Test: blockdev nvme admin passthru ...passed 00:24:31.001 Test: blockdev copy ...passed 00:24:31.001 00:24:31.001 Run Summary: Type Total Ran Passed Failed Inactive 00:24:31.001 suites 1 1 n/a 0 0 00:24:31.001 tests 23 23 23 0 0 00:24:31.002 asserts 152 152 152 0 n/a 00:24:31.002 00:24:31.002 Elapsed time = 1.395 seconds 
00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:31.263 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:31.264 rmmod nvme_tcp 00:24:31.264 rmmod nvme_fabrics 00:24:31.264 rmmod nvme_keyring 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 2012829 ']' 00:24:31.264 08:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 2012829 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2012829 ']' 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2012829 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.264 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012829 00:24:31.524 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:31.524 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:31.524 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012829' 00:24:31.524 killing process with pid 2012829 00:24:31.524 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2012829 00:24:31.524 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2012829 00:24:31.785 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:31.785 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:24:31.785 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:24:31.785 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:31.785 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:24:31.786 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:31.786 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:33.702 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:33.702 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:33.702 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:24:33.702 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:33.702 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:33.703 08:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:33.703 00:24:33.703 real 0m13.750s 00:24:33.703 user 0m15.192s 00:24:33.703 sys 0m7.352s 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.703 ************************************ 00:24:33.703 END TEST nvmf_bdevio_no_huge 00:24:33.703 ************************************ 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:33.703 ************************************ 00:24:33.703 START TEST nvmf_tls 00:24:33.703 ************************************ 00:24:33.703 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:33.965 * Looking for test storage... 00:24:33.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.965 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:33.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.966 --rc genhtml_branch_coverage=1 00:24:33.966 --rc genhtml_function_coverage=1 00:24:33.966 --rc genhtml_legend=1 00:24:33.966 --rc geninfo_all_blocks=1 00:24:33.966 --rc geninfo_unexecuted_blocks=1 00:24:33.966 00:24:33.966 ' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.966 --rc genhtml_branch_coverage=1 00:24:33.966 --rc genhtml_function_coverage=1 00:24:33.966 --rc genhtml_legend=1 00:24:33.966 --rc geninfo_all_blocks=1 00:24:33.966 --rc geninfo_unexecuted_blocks=1 00:24:33.966 00:24:33.966 ' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.966 --rc genhtml_branch_coverage=1 00:24:33.966 --rc genhtml_function_coverage=1 00:24:33.966 --rc genhtml_legend=1 00:24:33.966 --rc geninfo_all_blocks=1 00:24:33.966 --rc geninfo_unexecuted_blocks=1 00:24:33.966 00:24:33.966 ' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.966 --rc genhtml_branch_coverage=1 00:24:33.966 --rc genhtml_function_coverage=1 00:24:33.966 --rc genhtml_legend=1 00:24:33.966 --rc geninfo_all_blocks=1 00:24:33.966 --rc geninfo_unexecuted_blocks=1 00:24:33.966 00:24:33.966 ' 00:24:33.966 08:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:24:33.966 08:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:33.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:33.966 08:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:24:33.966 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:24:42.115 
08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:42.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:42.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:42.115 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:42.116 Found net devices under 0000:31:00.0: cvl_0_0 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:42.116 Found net devices under 0000:31:00.1: cvl_0_1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@247 -- # create_target_ns 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 
00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:42.116 10.0.0.1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 
10.0.0.2 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:42.116 10.0.0.2 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:42.116 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:42.117 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:42.117 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.117 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.117 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:42.117 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:42.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.654 ms 00:24:42.378 00:24:42.378 --- 10.0.0.1 ping statistics --- 00:24:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.378 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:42.378 08:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:42.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:42.378 00:24:42.378 --- 10.0.0.2 ping statistics --- 00:24:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.378 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:24:42.378 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:42.379 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:24:42.379 08:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2018227 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2018227 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018227 
']' 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.379 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.640 [2024-11-20 08:21:47.142885] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:24:42.640 [2024-11-20 08:21:47.142951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.641 [2024-11-20 08:21:47.257602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.641 [2024-11-20 08:21:47.308609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.641 [2024-11-20 08:21:47.308656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.641 [2024-11-20 08:21:47.308665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.641 [2024-11-20 08:21:47.308673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.641 [2024-11-20 08:21:47.308679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:42.641 [2024-11-20 08:21:47.309464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.277 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:43.537 true 00:24:43.537 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:43.537 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:24:43.797 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:24:43.797 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:24:43.797 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:43.797 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:43.797 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:24:44.057 08:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:24:44.057 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:24:44.057 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:44.318 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:24:44.579 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:24:44.579 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:24:44.579 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:44.840 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:44.840 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:24:44.840 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:24:44.840 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 
00:24:44.840 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:45.100 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:45.100 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@504 -- # local prefix key digest 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.TqPjNcCBvE 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.lwlDrBVqGJ 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.TqPjNcCBvE 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.lwlDrBVqGJ 00:24:45.362 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:45.624 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:45.885 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.TqPjNcCBvE 00:24:45.885 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TqPjNcCBvE 00:24:45.885 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:45.885 [2024-11-20 08:21:50.541417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.885 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:46.145 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:46.405 [2024-11-20 08:21:50.874238] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.405 [2024-11-20 08:21:50.874444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.405 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:46.405 malloc0 00:24:46.405 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:46.665 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TqPjNcCBvE 00:24:46.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk key0 00:24:46.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TqPjNcCBvE 00:24:56.923 Initializing NVMe Controllers 00:24:56.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.923 Initialization complete. Launching workers. 00:24:56.923 ======================================================== 00:24:56.923 Latency(us) 00:24:56.923 Device Information : IOPS MiB/s Average min max 00:24:56.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18580.98 72.58 3444.46 1169.05 5376.68 00:24:56.923 ======================================================== 00:24:56.923 Total : 18580.98 72.58 3444.46 1169.05 5376.68 00:24:56.923 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqPjNcCBvE 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TqPjNcCBvE 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2020972 
00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2020972 /var/tmp/bdevperf.sock 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2020972 ']' 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:57.184 [2024-11-20 08:22:01.707905] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:24:57.184 [2024-11-20 08:22:01.707964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020972 ] 00:24:57.184 [2024-11-20 08:22:01.771329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.184 [2024-11-20 08:22:01.800282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:57.184 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqPjNcCBvE 00:24:57.445 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:57.706 [2024-11-20 08:22:02.197500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.706 TLSTESTn1 00:24:57.706 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.706 Running I/O for 10 seconds... 
00:24:59.666 6475.00 IOPS, 25.29 MiB/s [2024-11-20T07:22:05.783Z] 6557.00 IOPS, 25.61 MiB/s [2024-11-20T07:22:06.726Z] 6520.67 IOPS, 25.47 MiB/s [2024-11-20T07:22:07.668Z] 6505.50 IOPS, 25.41 MiB/s [2024-11-20T07:22:08.610Z] 6336.20 IOPS, 24.75 MiB/s [2024-11-20T07:22:09.552Z] 6292.50 IOPS, 24.58 MiB/s [2024-11-20T07:22:10.496Z] 6325.29 IOPS, 24.71 MiB/s [2024-11-20T07:22:11.439Z] 6297.25 IOPS, 24.60 MiB/s [2024-11-20T07:22:12.825Z] 6308.56 IOPS, 24.64 MiB/s [2024-11-20T07:22:12.825Z] 6319.70 IOPS, 24.69 MiB/s 00:25:08.096 Latency(us) 00:25:08.096 [2024-11-20T07:22:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.096 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:08.096 Verification LBA range: start 0x0 length 0x2000 00:25:08.096 TLSTESTn1 : 10.02 6322.17 24.70 0.00 0.00 20214.07 5406.72 23920.64 00:25:08.096 [2024-11-20T07:22:12.825Z] =================================================================================================================== 00:25:08.096 [2024-11-20T07:22:12.825Z] Total : 6322.17 24.70 0.00 0.00 20214.07 5406.72 23920.64 00:25:08.096 { 00:25:08.096 "results": [ 00:25:08.096 { 00:25:08.096 "job": "TLSTESTn1", 00:25:08.096 "core_mask": "0x4", 00:25:08.096 "workload": "verify", 00:25:08.096 "status": "finished", 00:25:08.096 "verify_range": { 00:25:08.096 "start": 0, 00:25:08.096 "length": 8192 00:25:08.096 }, 00:25:08.096 "queue_depth": 128, 00:25:08.096 "io_size": 4096, 00:25:08.096 "runtime": 10.016189, 00:25:08.096 "iops": 6322.165047005403, 00:25:08.096 "mibps": 24.695957214864855, 00:25:08.096 "io_failed": 0, 00:25:08.096 "io_timeout": 0, 00:25:08.096 "avg_latency_us": 20214.072635546294, 00:25:08.096 "min_latency_us": 5406.72, 00:25:08.096 "max_latency_us": 23920.64 00:25:08.096 } 00:25:08.096 ], 00:25:08.096 "core_count": 1 00:25:08.096 } 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2020972 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2020972 ']' 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2020972 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020972 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020972' 00:25:08.096 killing process with pid 2020972 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2020972 00:25:08.096 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.096 00:25:08.096 Latency(us) 00:25:08.096 [2024-11-20T07:22:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.096 [2024-11-20T07:22:12.825Z] =================================================================================================================== 00:25:08.096 [2024-11-20T07:22:12.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.096 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2020972 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lwlDrBVqGJ 00:25:08.097 08:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lwlDrBVqGJ 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lwlDrBVqGJ 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lwlDrBVqGJ 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2023091 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2023091 /var/tmp/bdevperf.sock 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2023091 ']' 
00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.097 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:08.097 [2024-11-20 08:22:12.660902] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:08.097 [2024-11-20 08:22:12.660962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023091 ] 00:25:08.097 [2024-11-20 08:22:12.723955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.097 [2024-11-20 08:22:12.752718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.358 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.358 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:08.358 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lwlDrBVqGJ 00:25:08.358 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:08.620 [2024-11-20 08:22:13.145805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.620 [2024-11-20 08:22:13.150874] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:08.620 [2024-11-20 08:22:13.151778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e9960 (107): Transport endpoint is not connected 00:25:08.620 [2024-11-20 08:22:13.152773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e9960 (9): Bad file descriptor 00:25:08.620 
[2024-11-20 08:22:13.153775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:08.620 [2024-11-20 08:22:13.153786] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:08.620 [2024-11-20 08:22:13.153792] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:08.620 [2024-11-20 08:22:13.153800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:08.620 request: 00:25:08.620 { 00:25:08.620 "name": "TLSTEST", 00:25:08.620 "trtype": "tcp", 00:25:08.620 "traddr": "10.0.0.2", 00:25:08.620 "adrfam": "ipv4", 00:25:08.620 "trsvcid": "4420", 00:25:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.620 "prchk_reftag": false, 00:25:08.620 "prchk_guard": false, 00:25:08.620 "hdgst": false, 00:25:08.620 "ddgst": false, 00:25:08.620 "psk": "key0", 00:25:08.620 "allow_unrecognized_csi": false, 00:25:08.620 "method": "bdev_nvme_attach_controller", 00:25:08.620 "req_id": 1 00:25:08.620 } 00:25:08.620 Got JSON-RPC error response 00:25:08.620 response: 00:25:08.620 { 00:25:08.620 "code": -5, 00:25:08.620 "message": "Input/output error" 00:25:08.620 } 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2023091 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2023091 ']' 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2023091 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023091 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023091' 00:25:08.620 killing process with pid 2023091 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2023091 00:25:08.620 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.620 00:25:08.620 Latency(us) 00:25:08.620 [2024-11-20T07:22:13.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.620 [2024-11-20T07:22:13.349Z] =================================================================================================================== 00:25:08.620 [2024-11-20T07:22:13.349Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2023091 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TqPjNcCBvE 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TqPjNcCBvE 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TqPjNcCBvE 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TqPjNcCBvE 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2023320 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2023320 /var/tmp/bdevperf.sock 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2023320 ']' 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.620 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.882 [2024-11-20 08:22:13.394284] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:08.882 [2024-11-20 08:22:13.394340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023320 ] 00:25:08.882 [2024-11-20 08:22:13.458515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.882 [2024-11-20 08:22:13.485668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.882 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.882 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:08.882 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqPjNcCBvE 00:25:09.143 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:09.405 [2024-11-20 08:22:13.902829] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.405 [2024-11-20 08:22:13.912625] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:09.405 [2024-11-20 08:22:13.912644] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:09.405 [2024-11-20 08:22:13.912664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:09.405 [2024-11-20 08:22:13.913024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143d960 (107): Transport endpoint is not connected 00:25:09.405 [2024-11-20 08:22:13.914021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143d960 (9): Bad file descriptor 00:25:09.405 [2024-11-20 08:22:13.915023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:09.405 [2024-11-20 08:22:13.915030] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:09.405 [2024-11-20 08:22:13.915036] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:09.405 [2024-11-20 08:22:13.915044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:09.405 request: 00:25:09.405 { 00:25:09.405 "name": "TLSTEST", 00:25:09.405 "trtype": "tcp", 00:25:09.405 "traddr": "10.0.0.2", 00:25:09.405 "adrfam": "ipv4", 00:25:09.405 "trsvcid": "4420", 00:25:09.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.405 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:09.405 "prchk_reftag": false, 00:25:09.405 "prchk_guard": false, 00:25:09.405 "hdgst": false, 00:25:09.405 "ddgst": false, 00:25:09.405 "psk": "key0", 00:25:09.405 "allow_unrecognized_csi": false, 00:25:09.405 "method": "bdev_nvme_attach_controller", 00:25:09.405 "req_id": 1 00:25:09.405 } 00:25:09.405 Got JSON-RPC error response 00:25:09.405 response: 00:25:09.405 { 00:25:09.405 "code": -5, 00:25:09.405 "message": "Input/output error" 00:25:09.405 } 00:25:09.405 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2023320 00:25:09.405 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2023320 ']' 00:25:09.405 08:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2023320 00:25:09.405 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:09.405 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.405 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023320 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023320' 00:25:09.405 killing process with pid 2023320 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2023320 00:25:09.405 Received shutdown signal, test time was about 10.000000 seconds 00:25:09.405 00:25:09.405 Latency(us) 00:25:09.405 [2024-11-20T07:22:14.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.405 [2024-11-20T07:22:14.134Z] =================================================================================================================== 00:25:09.405 [2024-11-20T07:22:14.134Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2023320 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.405 08:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqPjNcCBvE 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqPjNcCBvE 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqPjNcCBvE 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TqPjNcCBvE 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2023354 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2023354 /var/tmp/bdevperf.sock 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2023354 ']' 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.405 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.406 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.406 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.406 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.667 [2024-11-20 08:22:14.160585] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:09.668 [2024-11-20 08:22:14.160638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023354 ] 00:25:09.668 [2024-11-20 08:22:14.225570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.668 [2024-11-20 08:22:14.254141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.668 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.668 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:09.668 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqPjNcCBvE 00:25:09.928 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.189 [2024-11-20 08:22:14.659448] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.189 [2024-11-20 08:22:14.664429] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.189 [2024-11-20 08:22:14.664446] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.189 [2024-11-20 08:22:14.664465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:10.189 [2024-11-20 08:22:14.664636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x830960 (107): Transport endpoint is not connected 00:25:10.189 [2024-11-20 08:22:14.665631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x830960 (9): Bad file descriptor 00:25:10.189 [2024-11-20 08:22:14.666633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:25:10.189 [2024-11-20 08:22:14.666641] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:10.189 [2024-11-20 08:22:14.666646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:10.189 [2024-11-20 08:22:14.666654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:25:10.189 request: 00:25:10.189 { 00:25:10.189 "name": "TLSTEST", 00:25:10.189 "trtype": "tcp", 00:25:10.189 "traddr": "10.0.0.2", 00:25:10.189 "adrfam": "ipv4", 00:25:10.189 "trsvcid": "4420", 00:25:10.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:10.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.189 "prchk_reftag": false, 00:25:10.189 "prchk_guard": false, 00:25:10.189 "hdgst": false, 00:25:10.189 "ddgst": false, 00:25:10.189 "psk": "key0", 00:25:10.189 "allow_unrecognized_csi": false, 00:25:10.189 "method": "bdev_nvme_attach_controller", 00:25:10.189 "req_id": 1 00:25:10.189 } 00:25:10.189 Got JSON-RPC error response 00:25:10.189 response: 00:25:10.189 { 00:25:10.189 "code": -5, 00:25:10.189 "message": "Input/output error" 00:25:10.189 } 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2023354 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2023354 ']' 00:25:10.189 08:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2023354 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023354 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023354' 00:25:10.189 killing process with pid 2023354 00:25:10.189 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2023354 00:25:10.189 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.189 00:25:10.189 Latency(us) 00:25:10.189 [2024-11-20T07:22:14.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.190 [2024-11-20T07:22:14.919Z] =================================================================================================================== 00:25:10.190 [2024-11-20T07:22:14.919Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2023354 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:10.190 08:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2023669 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.190 08:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2023669 /var/tmp/bdevperf.sock 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2023669 ']' 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.190 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.190 [2024-11-20 08:22:14.913189] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:10.190 [2024-11-20 08:22:14.913243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023669 ] 00:25:10.450 [2024-11-20 08:22:14.978570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.450 [2024-11-20 08:22:15.006165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.450 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.450 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:10.450 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:10.711 [2024-11-20 08:22:15.238907] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:10.711 [2024-11-20 08:22:15.238936] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:10.711 request: 00:25:10.711 { 00:25:10.711 "name": "key0", 00:25:10.711 "path": "", 00:25:10.711 "method": "keyring_file_add_key", 00:25:10.711 "req_id": 1 00:25:10.711 } 00:25:10.711 Got JSON-RPC error response 00:25:10.711 response: 00:25:10.711 { 00:25:10.711 "code": -1, 00:25:10.711 "message": "Operation not permitted" 00:25:10.711 } 00:25:10.711 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.711 [2024-11-20 08:22:15.423451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:25:10.711 [2024-11-20 08:22:15.423481] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:10.711 request: 00:25:10.711 { 00:25:10.711 "name": "TLSTEST", 00:25:10.711 "trtype": "tcp", 00:25:10.711 "traddr": "10.0.0.2", 00:25:10.711 "adrfam": "ipv4", 00:25:10.711 "trsvcid": "4420", 00:25:10.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.711 "prchk_reftag": false, 00:25:10.711 "prchk_guard": false, 00:25:10.711 "hdgst": false, 00:25:10.711 "ddgst": false, 00:25:10.711 "psk": "key0", 00:25:10.711 "allow_unrecognized_csi": false, 00:25:10.711 "method": "bdev_nvme_attach_controller", 00:25:10.711 "req_id": 1 00:25:10.711 } 00:25:10.711 Got JSON-RPC error response 00:25:10.711 response: 00:25:10.711 { 00:25:10.711 "code": -126, 00:25:10.711 "message": "Required key not available" 00:25:10.711 } 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2023669 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2023669 ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2023669 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023669 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023669' 00:25:10.973 killing process with pid 2023669 
00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2023669 00:25:10.973 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.973 00:25:10.973 Latency(us) 00:25:10.973 [2024-11-20T07:22:15.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.973 [2024-11-20T07:22:15.702Z] =================================================================================================================== 00:25:10.973 [2024-11-20T07:22:15.702Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2023669 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 2018227 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018227 ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018227 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018227 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018227' 00:25:10.973 killing process with pid 2018227 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018227 00:25:10.973 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018227 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.Sbd8on77ik 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:11.235 08:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.Sbd8on77ik 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2023712 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2023712 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2023712 ']' 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.235 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.235 [2024-11-20 08:22:15.902486] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:11.235 [2024-11-20 08:22:15.902565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.496 [2024-11-20 08:22:15.999913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.496 [2024-11-20 08:22:16.030011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.496 [2024-11-20 08:22:16.030040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.496 [2024-11-20 08:22:16.030046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.496 [2024-11-20 08:22:16.030051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.496 [2024-11-20 08:22:16.030055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:11.496 [2024-11-20 08:22:16.030499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sbd8on77ik 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:12.197 [2024-11-20 08:22:16.854463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.197 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:12.469 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:12.469 [2024-11-20 08:22:17.171249] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.469 [2024-11-20 08:22:17.171447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:12.469 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:12.731 malloc0 00:25:12.731 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:12.993 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:12.993 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sbd8on77ik 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sbd8on77ik 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2024226 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2024226 /var/tmp/bdevperf.sock 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2024226 ']' 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.254 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.254 [2024-11-20 08:22:17.846085] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:13.254 [2024-11-20 08:22:17.846129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024226 ] 00:25:13.254 [2024-11-20 08:22:17.901957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.254 [2024-11-20 08:22:17.931016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.515 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.515 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:13.515 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:13.515 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:13.776 [2024-11-20 08:22:18.316325] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.776 TLSTESTn1 00:25:13.776 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:13.776 Running I/O for 10 seconds... 
00:25:16.106 4742.00 IOPS, 18.52 MiB/s [2024-11-20T07:22:21.777Z] 5220.00 IOPS, 20.39 MiB/s [2024-11-20T07:22:22.719Z] 5379.33 IOPS, 21.01 MiB/s [2024-11-20T07:22:23.662Z] 5499.75 IOPS, 21.48 MiB/s [2024-11-20T07:22:24.605Z] 5526.40 IOPS, 21.59 MiB/s [2024-11-20T07:22:25.547Z] 5601.17 IOPS, 21.88 MiB/s [2024-11-20T07:22:26.933Z] 5628.43 IOPS, 21.99 MiB/s [2024-11-20T07:22:27.875Z] 5602.38 IOPS, 21.88 MiB/s [2024-11-20T07:22:28.818Z] 5596.11 IOPS, 21.86 MiB/s [2024-11-20T07:22:28.818Z] 5643.10 IOPS, 22.04 MiB/s 00:25:24.089 Latency(us) 00:25:24.089 [2024-11-20T07:22:28.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.089 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:24.089 Verification LBA range: start 0x0 length 0x2000 00:25:24.089 TLSTESTn1 : 10.02 5643.94 22.05 0.00 0.00 22641.28 6307.84 26432.85 00:25:24.089 [2024-11-20T07:22:28.818Z] =================================================================================================================== 00:25:24.089 [2024-11-20T07:22:28.818Z] Total : 5643.94 22.05 0.00 0.00 22641.28 6307.84 26432.85 00:25:24.089 { 00:25:24.089 "results": [ 00:25:24.089 { 00:25:24.089 "job": "TLSTESTn1", 00:25:24.089 "core_mask": "0x4", 00:25:24.089 "workload": "verify", 00:25:24.089 "status": "finished", 00:25:24.089 "verify_range": { 00:25:24.089 "start": 0, 00:25:24.089 "length": 8192 00:25:24.089 }, 00:25:24.089 "queue_depth": 128, 00:25:24.089 "io_size": 4096, 00:25:24.089 "runtime": 10.021015, 00:25:24.089 "iops": 5643.93926164166, 00:25:24.089 "mibps": 22.046637740787734, 00:25:24.089 "io_failed": 0, 00:25:24.089 "io_timeout": 0, 00:25:24.089 "avg_latency_us": 22641.279117366244, 00:25:24.089 "min_latency_us": 6307.84, 00:25:24.089 "max_latency_us": 26432.853333333333 00:25:24.089 } 00:25:24.089 ], 00:25:24.089 "core_count": 1 00:25:24.089 } 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2024226 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2024226 ']' 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2024226 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2024226 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2024226' 00:25:24.089 killing process with pid 2024226 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2024226 00:25:24.089 Received shutdown signal, test time was about 10.000000 seconds 00:25:24.089 00:25:24.089 Latency(us) 00:25:24.089 [2024-11-20T07:22:28.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.089 [2024-11-20T07:22:28.818Z] =================================================================================================================== 00:25:24.089 [2024-11-20T07:22:28.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2024226 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.Sbd8on77ik 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sbd8on77ik 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sbd8on77ik 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sbd8on77ik 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sbd8on77ik 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2026404 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2026404 /var/tmp/bdevperf.sock 
00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2026404 ']' 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.089 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.090 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.090 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.090 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:24.090 [2024-11-20 08:22:28.780150] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:24.090 [2024-11-20 08:22:28.780239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026404 ] 00:25:24.351 [2024-11-20 08:22:28.847645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.351 [2024-11-20 08:22:28.876087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.351 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.351 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:24.351 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:24.612 [2024-11-20 08:22:29.096594] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Sbd8on77ik': 0100666 00:25:24.612 [2024-11-20 08:22:29.096615] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:24.612 request: 00:25:24.612 { 00:25:24.612 "name": "key0", 00:25:24.612 "path": "/tmp/tmp.Sbd8on77ik", 00:25:24.612 "method": "keyring_file_add_key", 00:25:24.612 "req_id": 1 00:25:24.612 } 00:25:24.612 Got JSON-RPC error response 00:25:24.612 response: 00:25:24.612 { 00:25:24.612 "code": -1, 00:25:24.612 "message": "Operation not permitted" 00:25:24.612 } 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:24.612 [2024-11-20 08:22:29.281130] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:24.612 [2024-11-20 08:22:29.281154] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:24.612 request: 00:25:24.612 { 00:25:24.612 "name": "TLSTEST", 00:25:24.612 "trtype": "tcp", 00:25:24.612 "traddr": "10.0.0.2", 00:25:24.612 "adrfam": "ipv4", 00:25:24.612 "trsvcid": "4420", 00:25:24.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.612 "prchk_reftag": false, 00:25:24.612 "prchk_guard": false, 00:25:24.612 "hdgst": false, 00:25:24.612 "ddgst": false, 00:25:24.612 "psk": "key0", 00:25:24.612 "allow_unrecognized_csi": false, 00:25:24.612 "method": "bdev_nvme_attach_controller", 00:25:24.612 "req_id": 1 00:25:24.612 } 00:25:24.612 Got JSON-RPC error response 00:25:24.612 response: 00:25:24.612 { 00:25:24.612 "code": -126, 00:25:24.612 "message": "Required key not available" 00:25:24.612 } 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2026404 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2026404 ']' 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2026404 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.612 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2026404 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2026404' 00:25:24.873 killing process with pid 2026404 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2026404 00:25:24.873 Received shutdown signal, test time was about 10.000000 seconds 00:25:24.873 00:25:24.873 Latency(us) 00:25:24.873 [2024-11-20T07:22:29.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.873 [2024-11-20T07:22:29.602Z] =================================================================================================================== 00:25:24.873 [2024-11-20T07:22:29.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2026404 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 2023712 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2023712 ']' 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2023712 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023712 00:25:24.873 
08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023712' 00:25:24.873 killing process with pid 2023712 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2023712 00:25:24.873 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2023712 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2026442 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2026442 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2026442 ']' 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:25.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.134 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 [2024-11-20 08:22:29.697754] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:25.134 [2024-11-20 08:22:29.697816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.134 [2024-11-20 08:22:29.794148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.134 [2024-11-20 08:22:29.823670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.134 [2024-11-20 08:22:29.823698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.134 [2024-11-20 08:22:29.823707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.134 [2024-11-20 08:22:29.823712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.134 [2024-11-20 08:22:29.823716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
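[Editor's note] The successful TLSTESTn1 run earlier in this log reported 5643.94 IOPS at a 4096-byte I/O size, printed as 22.05 MiB/s. That conversion can be checked directly; the figures below are copied from the `"results"` JSON dumped by bdevperf above, and nothing is assumed beyond the 1 MiB = 1048576 bytes convention:

```shell
# throughput sanity check: IOPS x io_size (bytes) / bytes-per-MiB = MiB/s
# 5643.93926164166 and 4096 are the "iops" and "io_size" fields from the log
awk 'BEGIN { printf "%.2f MiB/s\n", 5643.93926164166 * 4096 / 1048576 }'
# prints "22.05 MiB/s"
```

This agrees with the `"mibps": 22.046637740787734` field in the same JSON block.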
00:25:25.134 [2024-11-20 08:22:29.824188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sbd8on77ik 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:26.075 [2024-11-20 08:22:30.679759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.075 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:26.336 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:26.336 [2024-11-20 08:22:31.004585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:26.336 [2024-11-20 08:22:31.004782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.336 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:26.597 malloc0 00:25:26.597 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:26.859 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:26.859 [2024-11-20 08:22:31.475410] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Sbd8on77ik': 0100666 00:25:26.859 [2024-11-20 08:22:31.475428] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:26.859 request: 00:25:26.859 { 00:25:26.859 "name": "key0", 00:25:26.859 "path": "/tmp/tmp.Sbd8on77ik", 00:25:26.859 "method": "keyring_file_add_key", 00:25:26.859 "req_id": 1 
00:25:26.859 } 00:25:26.859 Got JSON-RPC error response 00:25:26.859 response: 00:25:26.859 { 00:25:26.859 "code": -1, 00:25:26.859 "message": "Operation not permitted" 00:25:26.859 } 00:25:26.859 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:27.120 [2024-11-20 08:22:31.631818] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:27.120 [2024-11-20 08:22:31.631851] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:27.120 request: 00:25:27.120 { 00:25:27.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.120 "host": "nqn.2016-06.io.spdk:host1", 00:25:27.120 "psk": "key0", 00:25:27.120 "method": "nvmf_subsystem_add_host", 00:25:27.120 "req_id": 1 00:25:27.120 } 00:25:27.120 Got JSON-RPC error response 00:25:27.120 response: 00:25:27.120 { 00:25:27.120 "code": -32603, 00:25:27.120 "message": "Internal error" 00:25:27.120 } 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 2026442 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2026442 ']' 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2026442 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:27.120 08:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2026442 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:27.120 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2026442' 00:25:27.121 killing process with pid 2026442 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2026442 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2026442 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.Sbd8on77ik 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2027000 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2027000 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2027000 ']' 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.121 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.382 [2024-11-20 08:22:31.883574] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:27.382 [2024-11-20 08:22:31.883629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.382 [2024-11-20 08:22:31.982502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.382 [2024-11-20 08:22:32.016446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.382 [2024-11-20 08:22:32.016485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.382 [2024-11-20 08:22:32.016491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.382 [2024-11-20 08:22:32.016496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.382 [2024-11-20 08:22:32.016501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
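[Editor's note] The `keyring_file_add_key` failures above ("Invalid permissions for key file ... 0100666", JSON-RPC error -1 "Operation not permitted") and the `chmod 0600 /tmp/tmp.Sbd8on77ik` fix at target/tls.sh@177 both hinge on the same rule: SPDK's keyring refuses a PSK file whose mode grants any group/other access. A standalone sketch of that check using only coreutils — the `check_psk_mode` helper below is hypothetical illustration, not SPDK code; the real check lives in keyring.c (`keyring_file_check_path`, per the log):

```shell
# hypothetical re-creation of the permission check this test exercises:
# a PSK key file must be 0600 (no group/other bits) or it is rejected
check_psk_mode() {
    mode=$(stat -c '%a' "$1")              # octal mode, e.g. 666 or 600
    if [ "$mode" != "600" ]; then
        echo "reject $1 (mode $mode): Operation not permitted"
        return 1
    fi
    echo "accept $1 (mode $mode)"
}

key=$(mktemp)                              # stand-in for the PSK file
chmod 0666 "$key"; check_psk_mode "$key"   # the failing case set up at tls.sh@166
chmod 0600 "$key"; check_psk_mode "$key"   # the fix applied at tls.sh@177
rm -f "$key"
```

With 0666 the helper prints the reject line and returns nonzero, matching the two `-1 Operation not permitted` responses in the log; after `chmod 0600` the same file is accepted, which is why the later `setup_nvmf_tgt` / bdevperf passes succeed.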
00:25:27.382 [2024-11-20 08:22:32.017065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.953 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.953 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:27.953 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:27.953 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.953 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.214 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.214 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:28.214 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sbd8on77ik 00:25:28.214 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:28.214 [2024-11-20 08:22:32.871893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.214 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:28.475 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:28.475 [2024-11-20 08:22:33.196681] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.475 [2024-11-20 08:22:33.196881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:28.736 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:28.736 malloc0 00:25:28.736 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:28.996 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:28.996 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=2027487 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 2027487 /var/tmp/bdevperf.sock 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2027487 ']' 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:25:29.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.257 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.257 [2024-11-20 08:22:33.910054] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:29.257 [2024-11-20 08:22:33.910110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027487 ] 00:25:29.257 [2024-11-20 08:22:33.974890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.517 [2024-11-20 08:22:34.003805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.086 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.086 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:30.086 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:30.347 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:30.347 [2024-11-20 08:22:35.018651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:30.607 TLSTESTn1 00:25:30.607 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:30.868 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:25:30.868 "subsystems": [ 00:25:30.868 { 00:25:30.868 "subsystem": "keyring", 00:25:30.868 "config": [ 00:25:30.868 { 00:25:30.868 "method": "keyring_file_add_key", 00:25:30.868 "params": { 00:25:30.868 "name": "key0", 00:25:30.868 "path": "/tmp/tmp.Sbd8on77ik" 00:25:30.868 } 00:25:30.868 } 00:25:30.868 ] 00:25:30.868 }, 00:25:30.868 { 00:25:30.868 "subsystem": "iobuf", 00:25:30.868 "config": [ 00:25:30.868 { 00:25:30.868 "method": "iobuf_set_options", 00:25:30.868 "params": { 00:25:30.868 "small_pool_count": 8192, 00:25:30.868 "large_pool_count": 1024, 00:25:30.868 "small_bufsize": 8192, 00:25:30.868 "large_bufsize": 135168, 00:25:30.868 "enable_numa": false 00:25:30.868 } 00:25:30.868 } 00:25:30.868 ] 00:25:30.868 }, 00:25:30.868 { 00:25:30.868 "subsystem": "sock", 00:25:30.868 "config": [ 00:25:30.868 { 00:25:30.868 "method": "sock_set_default_impl", 00:25:30.868 "params": { 00:25:30.868 "impl_name": "posix" 00:25:30.868 } 00:25:30.868 }, 00:25:30.868 { 00:25:30.868 "method": "sock_impl_set_options", 00:25:30.868 "params": { 00:25:30.868 "impl_name": "ssl", 00:25:30.868 "recv_buf_size": 4096, 00:25:30.868 "send_buf_size": 4096, 00:25:30.868 "enable_recv_pipe": true, 00:25:30.868 "enable_quickack": false, 00:25:30.868 "enable_placement_id": 0, 00:25:30.869 "enable_zerocopy_send_server": true, 00:25:30.869 "enable_zerocopy_send_client": false, 00:25:30.869 "zerocopy_threshold": 0, 00:25:30.869 "tls_version": 0, 00:25:30.869 "enable_ktls": false 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "sock_impl_set_options", 00:25:30.869 "params": { 00:25:30.869 "impl_name": "posix", 00:25:30.869 "recv_buf_size": 2097152, 00:25:30.869 "send_buf_size": 2097152, 00:25:30.869 "enable_recv_pipe": true, 00:25:30.869 "enable_quickack": false, 00:25:30.869 "enable_placement_id": 0, 
00:25:30.869 "enable_zerocopy_send_server": true, 00:25:30.869 "enable_zerocopy_send_client": false, 00:25:30.869 "zerocopy_threshold": 0, 00:25:30.869 "tls_version": 0, 00:25:30.869 "enable_ktls": false 00:25:30.869 } 00:25:30.869 } 00:25:30.869 ] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "vmd", 00:25:30.869 "config": [] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "accel", 00:25:30.869 "config": [ 00:25:30.869 { 00:25:30.869 "method": "accel_set_options", 00:25:30.869 "params": { 00:25:30.869 "small_cache_size": 128, 00:25:30.869 "large_cache_size": 16, 00:25:30.869 "task_count": 2048, 00:25:30.869 "sequence_count": 2048, 00:25:30.869 "buf_count": 2048 00:25:30.869 } 00:25:30.869 } 00:25:30.869 ] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "bdev", 00:25:30.869 "config": [ 00:25:30.869 { 00:25:30.869 "method": "bdev_set_options", 00:25:30.869 "params": { 00:25:30.869 "bdev_io_pool_size": 65535, 00:25:30.869 "bdev_io_cache_size": 256, 00:25:30.869 "bdev_auto_examine": true, 00:25:30.869 "iobuf_small_cache_size": 128, 00:25:30.869 "iobuf_large_cache_size": 16 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_raid_set_options", 00:25:30.869 "params": { 00:25:30.869 "process_window_size_kb": 1024, 00:25:30.869 "process_max_bandwidth_mb_sec": 0 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_iscsi_set_options", 00:25:30.869 "params": { 00:25:30.869 "timeout_sec": 30 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_nvme_set_options", 00:25:30.869 "params": { 00:25:30.869 "action_on_timeout": "none", 00:25:30.869 "timeout_us": 0, 00:25:30.869 "timeout_admin_us": 0, 00:25:30.869 "keep_alive_timeout_ms": 10000, 00:25:30.869 "arbitration_burst": 0, 00:25:30.869 "low_priority_weight": 0, 00:25:30.869 "medium_priority_weight": 0, 00:25:30.869 "high_priority_weight": 0, 00:25:30.869 "nvme_adminq_poll_period_us": 10000, 00:25:30.869 "nvme_ioq_poll_period_us": 0, 
00:25:30.869 "io_queue_requests": 0, 00:25:30.869 "delay_cmd_submit": true, 00:25:30.869 "transport_retry_count": 4, 00:25:30.869 "bdev_retry_count": 3, 00:25:30.869 "transport_ack_timeout": 0, 00:25:30.869 "ctrlr_loss_timeout_sec": 0, 00:25:30.869 "reconnect_delay_sec": 0, 00:25:30.869 "fast_io_fail_timeout_sec": 0, 00:25:30.869 "disable_auto_failback": false, 00:25:30.869 "generate_uuids": false, 00:25:30.869 "transport_tos": 0, 00:25:30.869 "nvme_error_stat": false, 00:25:30.869 "rdma_srq_size": 0, 00:25:30.869 "io_path_stat": false, 00:25:30.869 "allow_accel_sequence": false, 00:25:30.869 "rdma_max_cq_size": 0, 00:25:30.869 "rdma_cm_event_timeout_ms": 0, 00:25:30.869 "dhchap_digests": [ 00:25:30.869 "sha256", 00:25:30.869 "sha384", 00:25:30.869 "sha512" 00:25:30.869 ], 00:25:30.869 "dhchap_dhgroups": [ 00:25:30.869 "null", 00:25:30.869 "ffdhe2048", 00:25:30.869 "ffdhe3072", 00:25:30.869 "ffdhe4096", 00:25:30.869 "ffdhe6144", 00:25:30.869 "ffdhe8192" 00:25:30.869 ] 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_nvme_set_hotplug", 00:25:30.869 "params": { 00:25:30.869 "period_us": 100000, 00:25:30.869 "enable": false 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_malloc_create", 00:25:30.869 "params": { 00:25:30.869 "name": "malloc0", 00:25:30.869 "num_blocks": 8192, 00:25:30.869 "block_size": 4096, 00:25:30.869 "physical_block_size": 4096, 00:25:30.869 "uuid": "f635d240-b2f6-460d-a58d-44de7c40a602", 00:25:30.869 "optimal_io_boundary": 0, 00:25:30.869 "md_size": 0, 00:25:30.869 "dif_type": 0, 00:25:30.869 "dif_is_head_of_md": false, 00:25:30.869 "dif_pi_format": 0 00:25:30.869 } 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "method": "bdev_wait_for_examine" 00:25:30.869 } 00:25:30.869 ] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "nbd", 00:25:30.869 "config": [] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "scheduler", 00:25:30.869 "config": [ 00:25:30.869 { 00:25:30.869 "method": 
"framework_set_scheduler", 00:25:30.869 "params": { 00:25:30.869 "name": "static" 00:25:30.869 } 00:25:30.869 } 00:25:30.869 ] 00:25:30.869 }, 00:25:30.869 { 00:25:30.869 "subsystem": "nvmf", 00:25:30.869 "config": [ 00:25:30.869 { 00:25:30.869 "method": "nvmf_set_config", 00:25:30.869 "params": { 00:25:30.869 "discovery_filter": "match_any", 00:25:30.869 "admin_cmd_passthru": { 00:25:30.869 "identify_ctrlr": false 00:25:30.869 }, 00:25:30.869 "dhchap_digests": [ 00:25:30.869 "sha256", 00:25:30.869 "sha384", 00:25:30.869 "sha512" 00:25:30.870 ], 00:25:30.870 "dhchap_dhgroups": [ 00:25:30.870 "null", 00:25:30.870 "ffdhe2048", 00:25:30.870 "ffdhe3072", 00:25:30.870 "ffdhe4096", 00:25:30.870 "ffdhe6144", 00:25:30.870 "ffdhe8192" 00:25:30.870 ] 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_set_max_subsystems", 00:25:30.870 "params": { 00:25:30.870 "max_subsystems": 1024 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_set_crdt", 00:25:30.870 "params": { 00:25:30.870 "crdt1": 0, 00:25:30.870 "crdt2": 0, 00:25:30.870 "crdt3": 0 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_create_transport", 00:25:30.870 "params": { 00:25:30.870 "trtype": "TCP", 00:25:30.870 "max_queue_depth": 128, 00:25:30.870 "max_io_qpairs_per_ctrlr": 127, 00:25:30.870 "in_capsule_data_size": 4096, 00:25:30.870 "max_io_size": 131072, 00:25:30.870 "io_unit_size": 131072, 00:25:30.870 "max_aq_depth": 128, 00:25:30.870 "num_shared_buffers": 511, 00:25:30.870 "buf_cache_size": 4294967295, 00:25:30.870 "dif_insert_or_strip": false, 00:25:30.870 "zcopy": false, 00:25:30.870 "c2h_success": false, 00:25:30.870 "sock_priority": 0, 00:25:30.870 "abort_timeout_sec": 1, 00:25:30.870 "ack_timeout": 0, 00:25:30.870 "data_wr_pool_size": 0 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_create_subsystem", 00:25:30.870 "params": { 00:25:30.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.870 
"allow_any_host": false, 00:25:30.870 "serial_number": "SPDK00000000000001", 00:25:30.870 "model_number": "SPDK bdev Controller", 00:25:30.870 "max_namespaces": 10, 00:25:30.870 "min_cntlid": 1, 00:25:30.870 "max_cntlid": 65519, 00:25:30.870 "ana_reporting": false 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_subsystem_add_host", 00:25:30.870 "params": { 00:25:30.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.870 "host": "nqn.2016-06.io.spdk:host1", 00:25:30.870 "psk": "key0" 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_subsystem_add_ns", 00:25:30.870 "params": { 00:25:30.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.870 "namespace": { 00:25:30.870 "nsid": 1, 00:25:30.870 "bdev_name": "malloc0", 00:25:30.870 "nguid": "F635D240B2F6460DA58D44DE7C40A602", 00:25:30.870 "uuid": "f635d240-b2f6-460d-a58d-44de7c40a602", 00:25:30.870 "no_auto_visible": false 00:25:30.870 } 00:25:30.870 } 00:25:30.870 }, 00:25:30.870 { 00:25:30.870 "method": "nvmf_subsystem_add_listener", 00:25:30.870 "params": { 00:25:30.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.870 "listen_address": { 00:25:30.870 "trtype": "TCP", 00:25:30.870 "adrfam": "IPv4", 00:25:30.870 "traddr": "10.0.0.2", 00:25:30.870 "trsvcid": "4420" 00:25:30.870 }, 00:25:30.870 "secure_channel": true 00:25:30.870 } 00:25:30.870 } 00:25:30.870 ] 00:25:30.870 } 00:25:30.870 ] 00:25:30.870 }' 00:25:30.870 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:31.131 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:25:31.131 "subsystems": [ 00:25:31.131 { 00:25:31.131 "subsystem": "keyring", 00:25:31.131 "config": [ 00:25:31.131 { 00:25:31.131 "method": "keyring_file_add_key", 00:25:31.131 "params": { 00:25:31.131 "name": "key0", 00:25:31.131 "path": "/tmp/tmp.Sbd8on77ik" 00:25:31.131 } 
00:25:31.131 } 00:25:31.131 ] 00:25:31.131 }, 00:25:31.131 { 00:25:31.131 "subsystem": "iobuf", 00:25:31.131 "config": [ 00:25:31.131 { 00:25:31.131 "method": "iobuf_set_options", 00:25:31.131 "params": { 00:25:31.131 "small_pool_count": 8192, 00:25:31.131 "large_pool_count": 1024, 00:25:31.131 "small_bufsize": 8192, 00:25:31.131 "large_bufsize": 135168, 00:25:31.131 "enable_numa": false 00:25:31.131 } 00:25:31.131 } 00:25:31.131 ] 00:25:31.131 }, 00:25:31.131 { 00:25:31.131 "subsystem": "sock", 00:25:31.131 "config": [ 00:25:31.131 { 00:25:31.131 "method": "sock_set_default_impl", 00:25:31.131 "params": { 00:25:31.131 "impl_name": "posix" 00:25:31.131 } 00:25:31.131 }, 00:25:31.131 { 00:25:31.131 "method": "sock_impl_set_options", 00:25:31.131 "params": { 00:25:31.131 "impl_name": "ssl", 00:25:31.131 "recv_buf_size": 4096, 00:25:31.131 "send_buf_size": 4096, 00:25:31.131 "enable_recv_pipe": true, 00:25:31.131 "enable_quickack": false, 00:25:31.131 "enable_placement_id": 0, 00:25:31.131 "enable_zerocopy_send_server": true, 00:25:31.131 "enable_zerocopy_send_client": false, 00:25:31.131 "zerocopy_threshold": 0, 00:25:31.131 "tls_version": 0, 00:25:31.131 "enable_ktls": false 00:25:31.131 } 00:25:31.131 }, 00:25:31.131 { 00:25:31.131 "method": "sock_impl_set_options", 00:25:31.131 "params": { 00:25:31.131 "impl_name": "posix", 00:25:31.131 "recv_buf_size": 2097152, 00:25:31.131 "send_buf_size": 2097152, 00:25:31.131 "enable_recv_pipe": true, 00:25:31.131 "enable_quickack": false, 00:25:31.131 "enable_placement_id": 0, 00:25:31.131 "enable_zerocopy_send_server": true, 00:25:31.131 "enable_zerocopy_send_client": false, 00:25:31.131 "zerocopy_threshold": 0, 00:25:31.131 "tls_version": 0, 00:25:31.131 "enable_ktls": false 00:25:31.131 } 00:25:31.131 } 00:25:31.132 ] 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "subsystem": "vmd", 00:25:31.132 "config": [] 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "subsystem": "accel", 00:25:31.132 "config": [ 00:25:31.132 { 00:25:31.132 
"method": "accel_set_options", 00:25:31.132 "params": { 00:25:31.132 "small_cache_size": 128, 00:25:31.132 "large_cache_size": 16, 00:25:31.132 "task_count": 2048, 00:25:31.132 "sequence_count": 2048, 00:25:31.132 "buf_count": 2048 00:25:31.132 } 00:25:31.132 } 00:25:31.132 ] 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "subsystem": "bdev", 00:25:31.132 "config": [ 00:25:31.132 { 00:25:31.132 "method": "bdev_set_options", 00:25:31.132 "params": { 00:25:31.132 "bdev_io_pool_size": 65535, 00:25:31.132 "bdev_io_cache_size": 256, 00:25:31.132 "bdev_auto_examine": true, 00:25:31.132 "iobuf_small_cache_size": 128, 00:25:31.132 "iobuf_large_cache_size": 16 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_raid_set_options", 00:25:31.132 "params": { 00:25:31.132 "process_window_size_kb": 1024, 00:25:31.132 "process_max_bandwidth_mb_sec": 0 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_iscsi_set_options", 00:25:31.132 "params": { 00:25:31.132 "timeout_sec": 30 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_nvme_set_options", 00:25:31.132 "params": { 00:25:31.132 "action_on_timeout": "none", 00:25:31.132 "timeout_us": 0, 00:25:31.132 "timeout_admin_us": 0, 00:25:31.132 "keep_alive_timeout_ms": 10000, 00:25:31.132 "arbitration_burst": 0, 00:25:31.132 "low_priority_weight": 0, 00:25:31.132 "medium_priority_weight": 0, 00:25:31.132 "high_priority_weight": 0, 00:25:31.132 "nvme_adminq_poll_period_us": 10000, 00:25:31.132 "nvme_ioq_poll_period_us": 0, 00:25:31.132 "io_queue_requests": 512, 00:25:31.132 "delay_cmd_submit": true, 00:25:31.132 "transport_retry_count": 4, 00:25:31.132 "bdev_retry_count": 3, 00:25:31.132 "transport_ack_timeout": 0, 00:25:31.132 "ctrlr_loss_timeout_sec": 0, 00:25:31.132 "reconnect_delay_sec": 0, 00:25:31.132 "fast_io_fail_timeout_sec": 0, 00:25:31.132 "disable_auto_failback": false, 00:25:31.132 "generate_uuids": false, 00:25:31.132 "transport_tos": 0, 00:25:31.132 
"nvme_error_stat": false, 00:25:31.132 "rdma_srq_size": 0, 00:25:31.132 "io_path_stat": false, 00:25:31.132 "allow_accel_sequence": false, 00:25:31.132 "rdma_max_cq_size": 0, 00:25:31.132 "rdma_cm_event_timeout_ms": 0, 00:25:31.132 "dhchap_digests": [ 00:25:31.132 "sha256", 00:25:31.132 "sha384", 00:25:31.132 "sha512" 00:25:31.132 ], 00:25:31.132 "dhchap_dhgroups": [ 00:25:31.132 "null", 00:25:31.132 "ffdhe2048", 00:25:31.132 "ffdhe3072", 00:25:31.132 "ffdhe4096", 00:25:31.132 "ffdhe6144", 00:25:31.132 "ffdhe8192" 00:25:31.132 ] 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_nvme_attach_controller", 00:25:31.132 "params": { 00:25:31.132 "name": "TLSTEST", 00:25:31.132 "trtype": "TCP", 00:25:31.132 "adrfam": "IPv4", 00:25:31.132 "traddr": "10.0.0.2", 00:25:31.132 "trsvcid": "4420", 00:25:31.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.132 "prchk_reftag": false, 00:25:31.132 "prchk_guard": false, 00:25:31.132 "ctrlr_loss_timeout_sec": 0, 00:25:31.132 "reconnect_delay_sec": 0, 00:25:31.132 "fast_io_fail_timeout_sec": 0, 00:25:31.132 "psk": "key0", 00:25:31.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.132 "hdgst": false, 00:25:31.132 "ddgst": false, 00:25:31.132 "multipath": "multipath" 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_nvme_set_hotplug", 00:25:31.132 "params": { 00:25:31.132 "period_us": 100000, 00:25:31.132 "enable": false 00:25:31.132 } 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "method": "bdev_wait_for_examine" 00:25:31.132 } 00:25:31.132 ] 00:25:31.132 }, 00:25:31.132 { 00:25:31.132 "subsystem": "nbd", 00:25:31.132 "config": [] 00:25:31.132 } 00:25:31.132 ] 00:25:31.132 }' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 2027487 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2027487 ']' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2027487 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027487 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027487' 00:25:31.132 killing process with pid 2027487 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2027487 00:25:31.132 Received shutdown signal, test time was about 10.000000 seconds 00:25:31.132 00:25:31.132 Latency(us) 00:25:31.132 [2024-11-20T07:22:35.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.132 [2024-11-20T07:22:35.861Z] =================================================================================================================== 00:25:31.132 [2024-11-20T07:22:35.861Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2027487 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 2027000 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2027000 ']' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2027000 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027000 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.132 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.133 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027000' 00:25:31.133 killing process with pid 2027000 00:25:31.133 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2027000 00:25:31.133 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2027000 00:25:31.394 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:31.394 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:31.394 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.394 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.394 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:25:31.394 "subsystems": [ 00:25:31.394 { 00:25:31.394 "subsystem": "keyring", 00:25:31.394 "config": [ 00:25:31.394 { 00:25:31.394 "method": "keyring_file_add_key", 00:25:31.394 "params": { 00:25:31.394 "name": "key0", 00:25:31.394 "path": "/tmp/tmp.Sbd8on77ik" 00:25:31.394 } 00:25:31.394 } 00:25:31.394 ] 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "subsystem": "iobuf", 00:25:31.394 "config": [ 00:25:31.394 { 00:25:31.394 "method": "iobuf_set_options", 00:25:31.394 "params": { 00:25:31.394 "small_pool_count": 8192, 00:25:31.394 "large_pool_count": 1024, 00:25:31.394 "small_bufsize": 8192, 00:25:31.394 "large_bufsize": 135168, 
00:25:31.394 "enable_numa": false 00:25:31.394 } 00:25:31.394 } 00:25:31.394 ] 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "subsystem": "sock", 00:25:31.394 "config": [ 00:25:31.394 { 00:25:31.394 "method": "sock_set_default_impl", 00:25:31.394 "params": { 00:25:31.394 "impl_name": "posix" 00:25:31.394 } 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "method": "sock_impl_set_options", 00:25:31.394 "params": { 00:25:31.394 "impl_name": "ssl", 00:25:31.394 "recv_buf_size": 4096, 00:25:31.394 "send_buf_size": 4096, 00:25:31.394 "enable_recv_pipe": true, 00:25:31.394 "enable_quickack": false, 00:25:31.394 "enable_placement_id": 0, 00:25:31.394 "enable_zerocopy_send_server": true, 00:25:31.394 "enable_zerocopy_send_client": false, 00:25:31.394 "zerocopy_threshold": 0, 00:25:31.394 "tls_version": 0, 00:25:31.394 "enable_ktls": false 00:25:31.394 } 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "method": "sock_impl_set_options", 00:25:31.394 "params": { 00:25:31.394 "impl_name": "posix", 00:25:31.394 "recv_buf_size": 2097152, 00:25:31.394 "send_buf_size": 2097152, 00:25:31.394 "enable_recv_pipe": true, 00:25:31.394 "enable_quickack": false, 00:25:31.394 "enable_placement_id": 0, 00:25:31.394 "enable_zerocopy_send_server": true, 00:25:31.394 "enable_zerocopy_send_client": false, 00:25:31.394 "zerocopy_threshold": 0, 00:25:31.394 "tls_version": 0, 00:25:31.394 "enable_ktls": false 00:25:31.394 } 00:25:31.394 } 00:25:31.394 ] 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "subsystem": "vmd", 00:25:31.394 "config": [] 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "subsystem": "accel", 00:25:31.394 "config": [ 00:25:31.394 { 00:25:31.394 "method": "accel_set_options", 00:25:31.394 "params": { 00:25:31.394 "small_cache_size": 128, 00:25:31.394 "large_cache_size": 16, 00:25:31.394 "task_count": 2048, 00:25:31.394 "sequence_count": 2048, 00:25:31.394 "buf_count": 2048 00:25:31.394 } 00:25:31.394 } 00:25:31.394 ] 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "subsystem": "bdev", 00:25:31.394 
"config": [ 00:25:31.394 { 00:25:31.394 "method": "bdev_set_options", 00:25:31.394 "params": { 00:25:31.394 "bdev_io_pool_size": 65535, 00:25:31.394 "bdev_io_cache_size": 256, 00:25:31.394 "bdev_auto_examine": true, 00:25:31.394 "iobuf_small_cache_size": 128, 00:25:31.394 "iobuf_large_cache_size": 16 00:25:31.394 } 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "method": "bdev_raid_set_options", 00:25:31.394 "params": { 00:25:31.394 "process_window_size_kb": 1024, 00:25:31.394 "process_max_bandwidth_mb_sec": 0 00:25:31.394 } 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "method": "bdev_iscsi_set_options", 00:25:31.394 "params": { 00:25:31.394 "timeout_sec": 30 00:25:31.394 } 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "method": "bdev_nvme_set_options", 00:25:31.394 "params": { 00:25:31.394 "action_on_timeout": "none", 00:25:31.394 "timeout_us": 0, 00:25:31.394 "timeout_admin_us": 0, 00:25:31.394 "keep_alive_timeout_ms": 10000, 00:25:31.394 "arbitration_burst": 0, 00:25:31.394 "low_priority_weight": 0, 00:25:31.394 "medium_priority_weight": 0, 00:25:31.394 "high_priority_weight": 0, 00:25:31.394 "nvme_adminq_poll_period_us": 10000, 00:25:31.394 "nvme_ioq_poll_period_us": 0, 00:25:31.394 "io_queue_requests": 0, 00:25:31.394 "delay_cmd_submit": true, 00:25:31.394 "transport_retry_count": 4, 00:25:31.394 "bdev_retry_count": 3, 00:25:31.394 "transport_ack_timeout": 0, 00:25:31.394 "ctrlr_loss_timeout_sec": 0, 00:25:31.394 "reconnect_delay_sec": 0, 00:25:31.394 "fast_io_fail_timeout_sec": 0, 00:25:31.394 "disable_auto_failback": false, 00:25:31.394 "generate_uuids": false, 00:25:31.394 "transport_tos": 0, 00:25:31.394 "nvme_error_stat": false, 00:25:31.394 "rdma_srq_size": 0, 00:25:31.394 "io_path_stat": false, 00:25:31.394 "allow_accel_sequence": false, 00:25:31.394 "rdma_max_cq_size": 0, 00:25:31.394 "rdma_cm_event_timeout_ms": 0, 00:25:31.395 "dhchap_digests": [ 00:25:31.395 "sha256", 00:25:31.395 "sha384", 00:25:31.395 "sha512" 00:25:31.395 ], 00:25:31.395 
"dhchap_dhgroups": [ 00:25:31.395 "null", 00:25:31.395 "ffdhe2048", 00:25:31.395 "ffdhe3072", 00:25:31.395 "ffdhe4096", 00:25:31.395 "ffdhe6144", 00:25:31.395 "ffdhe8192" 00:25:31.395 ] 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "bdev_nvme_set_hotplug", 00:25:31.395 "params": { 00:25:31.395 "period_us": 100000, 00:25:31.395 "enable": false 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "bdev_malloc_create", 00:25:31.395 "params": { 00:25:31.395 "name": "malloc0", 00:25:31.395 "num_blocks": 8192, 00:25:31.395 "block_size": 4096, 00:25:31.395 "physical_block_size": 4096, 00:25:31.395 "uuid": "f635d240-b2f6-460d-a58d-44de7c40a602", 00:25:31.395 "optimal_io_boundary": 0, 00:25:31.395 "md_size": 0, 00:25:31.395 "dif_type": 0, 00:25:31.395 "dif_is_head_of_md": false, 00:25:31.395 "dif_pi_format": 0 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "bdev_wait_for_examine" 00:25:31.395 } 00:25:31.395 ] 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "subsystem": "nbd", 00:25:31.395 "config": [] 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "subsystem": "scheduler", 00:25:31.395 "config": [ 00:25:31.395 { 00:25:31.395 "method": "framework_set_scheduler", 00:25:31.395 "params": { 00:25:31.395 "name": "static" 00:25:31.395 } 00:25:31.395 } 00:25:31.395 ] 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "subsystem": "nvmf", 00:25:31.395 "config": [ 00:25:31.395 { 00:25:31.395 "method": "nvmf_set_config", 00:25:31.395 "params": { 00:25:31.395 "discovery_filter": "match_any", 00:25:31.395 "admin_cmd_passthru": { 00:25:31.395 "identify_ctrlr": false 00:25:31.395 }, 00:25:31.395 "dhchap_digests": [ 00:25:31.395 "sha256", 00:25:31.395 "sha384", 00:25:31.395 "sha512" 00:25:31.395 ], 00:25:31.395 "dhchap_dhgroups": [ 00:25:31.395 "null", 00:25:31.395 "ffdhe2048", 00:25:31.395 "ffdhe3072", 00:25:31.395 "ffdhe4096", 00:25:31.395 "ffdhe6144", 00:25:31.395 "ffdhe8192" 00:25:31.395 ] 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 
00:25:31.395 "method": "nvmf_set_max_subsystems", 00:25:31.395 "params": { 00:25:31.395 "max_subsystems": 1024 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_set_crdt", 00:25:31.395 "params": { 00:25:31.395 "crdt1": 0, 00:25:31.395 "crdt2": 0, 00:25:31.395 "crdt3": 0 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_create_transport", 00:25:31.395 "params": { 00:25:31.395 "trtype": "TCP", 00:25:31.395 "max_queue_depth": 128, 00:25:31.395 "max_io_qpairs_per_ctrlr": 127, 00:25:31.395 "in_capsule_data_size": 4096, 00:25:31.395 "max_io_size": 131072, 00:25:31.395 "io_unit_size": 131072, 00:25:31.395 "max_aq_depth": 128, 00:25:31.395 "num_shared_buffers": 511, 00:25:31.395 "buf_cache_size": 4294967295, 00:25:31.395 "dif_insert_or_strip": false, 00:25:31.395 "zcopy": false, 00:25:31.395 "c2h_success": false, 00:25:31.395 "sock_priority": 0, 00:25:31.395 "abort_timeout_sec": 1, 00:25:31.395 "ack_timeout": 0, 00:25:31.395 "data_wr_pool_size": 0 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_create_subsystem", 00:25:31.395 "params": { 00:25:31.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.395 "allow_any_host": false, 00:25:31.395 "serial_number": "SPDK00000000000001", 00:25:31.395 "model_number": "SPDK bdev Controller", 00:25:31.395 "max_namespaces": 10, 00:25:31.395 "min_cntlid": 1, 00:25:31.395 "max_cntlid": 65519, 00:25:31.395 "ana_reporting": false 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_subsystem_add_host", 00:25:31.395 "params": { 00:25:31.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.395 "host": "nqn.2016-06.io.spdk:host1", 00:25:31.395 "psk": "key0" 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_subsystem_add_ns", 00:25:31.395 "params": { 00:25:31.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.395 "namespace": { 00:25:31.395 "nsid": 1, 00:25:31.395 "bdev_name": "malloc0", 00:25:31.395 "nguid": 
"F635D240B2F6460DA58D44DE7C40A602", 00:25:31.395 "uuid": "f635d240-b2f6-460d-a58d-44de7c40a602", 00:25:31.395 "no_auto_visible": false 00:25:31.395 } 00:25:31.395 } 00:25:31.395 }, 00:25:31.395 { 00:25:31.395 "method": "nvmf_subsystem_add_listener", 00:25:31.395 "params": { 00:25:31.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.395 "listen_address": { 00:25:31.395 "trtype": "TCP", 00:25:31.395 "adrfam": "IPv4", 00:25:31.395 "traddr": "10.0.0.2", 00:25:31.395 "trsvcid": "4420" 00:25:31.395 }, 00:25:31.395 "secure_channel": true 00:25:31.395 } 00:25:31.395 } 00:25:31.395 ] 00:25:31.395 } 00:25:31.395 ] 00:25:31.395 }' 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2027845 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2027845 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2027845 ']' 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.395 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.395 [2024-11-20 08:22:36.024968] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:31.395 [2024-11-20 08:22:36.025027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.395 [2024-11-20 08:22:36.119927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.656 [2024-11-20 08:22:36.149259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.656 [2024-11-20 08:22:36.149287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.656 [2024-11-20 08:22:36.149293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.656 [2024-11-20 08:22:36.149298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.656 [2024-11-20 08:22:36.149302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.656 [2024-11-20 08:22:36.149816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.656 [2024-11-20 08:22:36.342417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.656 [2024-11-20 08:22:36.374445] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:31.656 [2024-11-20 08:22:36.374643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=2027967 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 2027967 /var/tmp/bdevperf.sock 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2027967 ']' 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:32.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.228 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:25:32.228 "subsystems": [ 00:25:32.228 { 00:25:32.228 "subsystem": "keyring", 00:25:32.228 "config": [ 00:25:32.228 { 00:25:32.228 "method": "keyring_file_add_key", 00:25:32.228 "params": { 00:25:32.228 "name": "key0", 00:25:32.228 "path": "/tmp/tmp.Sbd8on77ik" 00:25:32.228 } 00:25:32.228 } 00:25:32.228 ] 00:25:32.228 }, 00:25:32.228 { 00:25:32.228 "subsystem": "iobuf", 00:25:32.228 "config": [ 00:25:32.228 { 00:25:32.228 "method": "iobuf_set_options", 00:25:32.228 "params": { 00:25:32.228 "small_pool_count": 8192, 00:25:32.228 "large_pool_count": 1024, 00:25:32.228 "small_bufsize": 8192, 00:25:32.228 "large_bufsize": 135168, 00:25:32.228 "enable_numa": false 00:25:32.228 } 00:25:32.228 } 00:25:32.228 ] 00:25:32.228 }, 00:25:32.228 { 00:25:32.228 "subsystem": "sock", 00:25:32.228 "config": [ 00:25:32.228 { 00:25:32.228 "method": "sock_set_default_impl", 00:25:32.228 "params": { 00:25:32.228 "impl_name": "posix" 00:25:32.228 } 00:25:32.228 }, 00:25:32.228 { 00:25:32.228 "method": "sock_impl_set_options", 00:25:32.228 "params": { 00:25:32.228 "impl_name": "ssl", 00:25:32.229 "recv_buf_size": 4096, 00:25:32.229 "send_buf_size": 4096, 00:25:32.229 "enable_recv_pipe": true, 00:25:32.229 "enable_quickack": false, 00:25:32.229 "enable_placement_id": 0, 00:25:32.229 "enable_zerocopy_send_server": true, 00:25:32.229 
"enable_zerocopy_send_client": false, 00:25:32.229 "zerocopy_threshold": 0, 00:25:32.229 "tls_version": 0, 00:25:32.229 "enable_ktls": false 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "sock_impl_set_options", 00:25:32.229 "params": { 00:25:32.229 "impl_name": "posix", 00:25:32.229 "recv_buf_size": 2097152, 00:25:32.229 "send_buf_size": 2097152, 00:25:32.229 "enable_recv_pipe": true, 00:25:32.229 "enable_quickack": false, 00:25:32.229 "enable_placement_id": 0, 00:25:32.229 "enable_zerocopy_send_server": true, 00:25:32.229 "enable_zerocopy_send_client": false, 00:25:32.229 "zerocopy_threshold": 0, 00:25:32.229 "tls_version": 0, 00:25:32.229 "enable_ktls": false 00:25:32.229 } 00:25:32.229 } 00:25:32.229 ] 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "subsystem": "vmd", 00:25:32.229 "config": [] 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "subsystem": "accel", 00:25:32.229 "config": [ 00:25:32.229 { 00:25:32.229 "method": "accel_set_options", 00:25:32.229 "params": { 00:25:32.229 "small_cache_size": 128, 00:25:32.229 "large_cache_size": 16, 00:25:32.229 "task_count": 2048, 00:25:32.229 "sequence_count": 2048, 00:25:32.229 "buf_count": 2048 00:25:32.229 } 00:25:32.229 } 00:25:32.229 ] 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "subsystem": "bdev", 00:25:32.229 "config": [ 00:25:32.229 { 00:25:32.229 "method": "bdev_set_options", 00:25:32.229 "params": { 00:25:32.229 "bdev_io_pool_size": 65535, 00:25:32.229 "bdev_io_cache_size": 256, 00:25:32.229 "bdev_auto_examine": true, 00:25:32.229 "iobuf_small_cache_size": 128, 00:25:32.229 "iobuf_large_cache_size": 16 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "bdev_raid_set_options", 00:25:32.229 "params": { 00:25:32.229 "process_window_size_kb": 1024, 00:25:32.229 "process_max_bandwidth_mb_sec": 0 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "bdev_iscsi_set_options", 00:25:32.229 "params": { 00:25:32.229 "timeout_sec": 30 00:25:32.229 } 00:25:32.229 }, 
00:25:32.229 { 00:25:32.229 "method": "bdev_nvme_set_options", 00:25:32.229 "params": { 00:25:32.229 "action_on_timeout": "none", 00:25:32.229 "timeout_us": 0, 00:25:32.229 "timeout_admin_us": 0, 00:25:32.229 "keep_alive_timeout_ms": 10000, 00:25:32.229 "arbitration_burst": 0, 00:25:32.229 "low_priority_weight": 0, 00:25:32.229 "medium_priority_weight": 0, 00:25:32.229 "high_priority_weight": 0, 00:25:32.229 "nvme_adminq_poll_period_us": 10000, 00:25:32.229 "nvme_ioq_poll_period_us": 0, 00:25:32.229 "io_queue_requests": 512, 00:25:32.229 "delay_cmd_submit": true, 00:25:32.229 "transport_retry_count": 4, 00:25:32.229 "bdev_retry_count": 3, 00:25:32.229 "transport_ack_timeout": 0, 00:25:32.229 "ctrlr_loss_timeout_sec": 0, 00:25:32.229 "reconnect_delay_sec": 0, 00:25:32.229 "fast_io_fail_timeout_sec": 0, 00:25:32.229 "disable_auto_failback": false, 00:25:32.229 "generate_uuids": false, 00:25:32.229 "transport_tos": 0, 00:25:32.229 "nvme_error_stat": false, 00:25:32.229 "rdma_srq_size": 0, 00:25:32.229 "io_path_stat": false, 00:25:32.229 "allow_accel_sequence": false, 00:25:32.229 "rdma_max_cq_size": 0, 00:25:32.229 "rdma_cm_event_timeout_ms": 0, 00:25:32.229 "dhchap_digests": [ 00:25:32.229 "sha256", 00:25:32.229 "sha384", 00:25:32.229 "sha512" 00:25:32.229 ], 00:25:32.229 "dhchap_dhgroups": [ 00:25:32.229 "null", 00:25:32.229 "ffdhe2048", 00:25:32.229 "ffdhe3072", 00:25:32.229 "ffdhe4096", 00:25:32.229 "ffdhe6144", 00:25:32.229 "ffdhe8192" 00:25:32.229 ] 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "bdev_nvme_attach_controller", 00:25:32.229 "params": { 00:25:32.229 "name": "TLSTEST", 00:25:32.229 "trtype": "TCP", 00:25:32.229 "adrfam": "IPv4", 00:25:32.229 "traddr": "10.0.0.2", 00:25:32.229 "trsvcid": "4420", 00:25:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.229 "prchk_reftag": false, 00:25:32.229 "prchk_guard": false, 00:25:32.229 "ctrlr_loss_timeout_sec": 0, 00:25:32.229 "reconnect_delay_sec": 0, 00:25:32.229 
"fast_io_fail_timeout_sec": 0, 00:25:32.229 "psk": "key0", 00:25:32.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.229 "hdgst": false, 00:25:32.229 "ddgst": false, 00:25:32.229 "multipath": "multipath" 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "bdev_nvme_set_hotplug", 00:25:32.229 "params": { 00:25:32.229 "period_us": 100000, 00:25:32.229 "enable": false 00:25:32.229 } 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "method": "bdev_wait_for_examine" 00:25:32.229 } 00:25:32.229 ] 00:25:32.229 }, 00:25:32.229 { 00:25:32.229 "subsystem": "nbd", 00:25:32.229 "config": [] 00:25:32.229 } 00:25:32.229 ] 00:25:32.229 }' 00:25:32.229 [2024-11-20 08:22:36.903235] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:32.230 [2024-11-20 08:22:36.903289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027967 ] 00:25:32.490 [2024-11-20 08:22:36.966510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.490 [2024-11-20 08:22:36.995637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.490 [2024-11-20 08:22:37.129834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:33.060 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.060 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:33.060 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:33.060 Running I/O for 10 seconds... 
00:25:35.398 4984.00 IOPS, 19.47 MiB/s [2024-11-20T07:22:41.069Z] 5109.00 IOPS, 19.96 MiB/s [2024-11-20T07:22:42.009Z] 5408.67 IOPS, 21.13 MiB/s [2024-11-20T07:22:42.952Z] 5650.25 IOPS, 22.07 MiB/s [2024-11-20T07:22:43.893Z] 5557.40 IOPS, 21.71 MiB/s [2024-11-20T07:22:44.833Z] 5596.67 IOPS, 21.86 MiB/s [2024-11-20T07:22:46.215Z] 5686.86 IOPS, 22.21 MiB/s [2024-11-20T07:22:47.156Z] 5697.62 IOPS, 22.26 MiB/s [2024-11-20T07:22:48.099Z] 5657.78 IOPS, 22.10 MiB/s [2024-11-20T07:22:48.099Z] 5576.10 IOPS, 21.78 MiB/s 00:25:43.370 Latency(us) 00:25:43.370 [2024-11-20T07:22:48.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:43.370 Verification LBA range: start 0x0 length 0x2000 00:25:43.370 TLSTESTn1 : 10.01 5582.44 21.81 0.00 0.00 22899.08 4450.99 29709.65 00:25:43.370 [2024-11-20T07:22:48.099Z] =================================================================================================================== 00:25:43.370 [2024-11-20T07:22:48.099Z] Total : 5582.44 21.81 0.00 0.00 22899.08 4450.99 29709.65 00:25:43.370 { 00:25:43.370 "results": [ 00:25:43.370 { 00:25:43.370 "job": "TLSTESTn1", 00:25:43.370 "core_mask": "0x4", 00:25:43.370 "workload": "verify", 00:25:43.370 "status": "finished", 00:25:43.370 "verify_range": { 00:25:43.370 "start": 0, 00:25:43.370 "length": 8192 00:25:43.370 }, 00:25:43.370 "queue_depth": 128, 00:25:43.370 "io_size": 4096, 00:25:43.370 "runtime": 10.011208, 00:25:43.370 "iops": 5582.443197664058, 00:25:43.370 "mibps": 21.806418740875227, 00:25:43.370 "io_failed": 0, 00:25:43.370 "io_timeout": 0, 00:25:43.370 "avg_latency_us": 22899.079360375996, 00:25:43.370 "min_latency_us": 4450.986666666667, 00:25:43.370 "max_latency_us": 29709.653333333332 00:25:43.370 } 00:25:43.370 ], 00:25:43.370 "core_count": 1 00:25:43.370 } 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 2027967 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2027967 ']' 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2027967 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027967 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027967' 00:25:43.370 killing process with pid 2027967 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2027967 00:25:43.370 Received shutdown signal, test time was about 10.000000 seconds 00:25:43.370 00:25:43.370 Latency(us) 00:25:43.370 [2024-11-20T07:22:48.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.370 [2024-11-20T07:22:48.099Z] =================================================================================================================== 00:25:43.370 [2024-11-20T07:22:48.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.370 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2027967 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 2027845 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2027845 ']' 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2027845 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027845 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027845' 00:25:43.370 killing process with pid 2027845 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2027845 00:25:43.370 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2027845 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2030217 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2030217 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:43.631 
08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2030217 ']' 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.631 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:43.631 [2024-11-20 08:22:48.238432] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:43.631 [2024-11-20 08:22:48.238487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.631 [2024-11-20 08:22:48.324402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.892 [2024-11-20 08:22:48.358970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.892 [2024-11-20 08:22:48.359003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.892 [2024-11-20 08:22:48.359011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.892 [2024-11-20 08:22:48.359018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:43.892 [2024-11-20 08:22:48.359024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.892 [2024-11-20 08:22:48.359586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.Sbd8on77ik 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sbd8on77ik 00:25:44.463 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:44.724 [2024-11-20 08:22:49.219728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.724 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:44.724 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:44.984 [2024-11-20 08:22:49.540528] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:25:44.984 [2024-11-20 08:22:49.540762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.985 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:44.985 malloc0 00:25:45.245 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:45.245 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=2030584 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 2030584 /var/tmp/bdevperf.sock 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2030584 ']' 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.505 
08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.505 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.765 [2024-11-20 08:22:50.249518] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:45.765 [2024-11-20 08:22:50.249573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030584 ] 00:25:45.765 [2024-11-20 08:22:50.340395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.765 [2024-11-20 08:22:50.370244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.337 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.337 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:46.337 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:46.599 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:46.859 [2024-11-20 08:22:51.357806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:25:46.859 nvme0n1 00:25:46.859 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:46.859 Running I/O for 1 seconds... 00:25:48.242 4879.00 IOPS, 19.06 MiB/s 00:25:48.242 Latency(us) 00:25:48.242 [2024-11-20T07:22:52.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.242 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:48.242 Verification LBA range: start 0x0 length 0x2000 00:25:48.242 nvme0n1 : 1.05 4774.03 18.65 0.00 0.00 26229.26 4614.83 45001.39 00:25:48.242 [2024-11-20T07:22:52.971Z] =================================================================================================================== 00:25:48.242 [2024-11-20T07:22:52.971Z] Total : 4774.03 18.65 0.00 0.00 26229.26 4614.83 45001.39 00:25:48.242 { 00:25:48.242 "results": [ 00:25:48.242 { 00:25:48.242 "job": "nvme0n1", 00:25:48.242 "core_mask": "0x2", 00:25:48.242 "workload": "verify", 00:25:48.242 "status": "finished", 00:25:48.242 "verify_range": { 00:25:48.242 "start": 0, 00:25:48.242 "length": 8192 00:25:48.242 }, 00:25:48.242 "queue_depth": 128, 00:25:48.242 "io_size": 4096, 00:25:48.242 "runtime": 1.0488, 00:25:48.242 "iops": 4774.027459954234, 00:25:48.242 "mibps": 18.648544765446225, 00:25:48.242 "io_failed": 0, 00:25:48.242 "io_timeout": 0, 00:25:48.242 "avg_latency_us": 26229.25860062579, 00:25:48.242 "min_latency_us": 4614.826666666667, 00:25:48.242 "max_latency_us": 45001.386666666665 00:25:48.242 } 00:25:48.242 ], 00:25:48.242 "core_count": 1 00:25:48.242 } 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 2030584 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2030584 ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2030584 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2030584 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2030584' 00:25:48.242 killing process with pid 2030584 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2030584 00:25:48.242 Received shutdown signal, test time was about 1.000000 seconds 00:25:48.242 00:25:48.242 Latency(us) 00:25:48.242 [2024-11-20T07:22:52.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.242 [2024-11-20T07:22:52.971Z] =================================================================================================================== 00:25:48.242 [2024-11-20T07:22:52.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2030584 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 2030217 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2030217 ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2030217 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2030217 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2030217' 00:25:48.242 killing process with pid 2030217 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2030217 00:25:48.242 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2030217 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2031210 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2031210 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2031210 ']' 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.503 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.503 [2024-11-20 08:22:53.031528] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:48.503 [2024-11-20 08:22:53.031587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.503 [2024-11-20 08:22:53.116228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.503 [2024-11-20 08:22:53.151967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.503 [2024-11-20 08:22:53.152001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.503 [2024-11-20 08:22:53.152010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.503 [2024-11-20 08:22:53.152018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.503 [2024-11-20 08:22:53.152024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:48.503 [2024-11-20 08:22:53.152569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.443 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.443 [2024-11-20 08:22:53.864582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.443 malloc0 00:25:49.443 [2024-11-20 08:22:53.891271] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:49.443 [2024-11-20 08:22:53.891494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=2031288 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 2031288 /var/tmp/bdevperf.sock 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2031288 ']' 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.444 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.444 [2024-11-20 08:22:53.968830] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:49.444 [2024-11-20 08:22:53.968885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031288 ] 00:25:49.444 [2024-11-20 08:22:54.058164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.444 [2024-11-20 08:22:54.088256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.386 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.386 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:50.386 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sbd8on77ik 00:25:50.386 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:50.386 [2024-11-20 08:22:55.076370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.646 nvme0n1 00:25:50.646 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.646 Running I/O for 1 seconds... 
00:25:51.587 4995.00 IOPS, 19.51 MiB/s 00:25:51.587 Latency(us) 00:25:51.587 [2024-11-20T07:22:56.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.587 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:51.587 Verification LBA range: start 0x0 length 0x2000 00:25:51.587 nvme0n1 : 1.04 4932.63 19.27 0.00 0.00 25592.95 6198.61 39321.60 00:25:51.587 [2024-11-20T07:22:56.316Z] =================================================================================================================== 00:25:51.587 [2024-11-20T07:22:56.316Z] Total : 4932.63 19.27 0.00 0.00 25592.95 6198.61 39321.60 00:25:51.587 { 00:25:51.587 "results": [ 00:25:51.587 { 00:25:51.587 "job": "nvme0n1", 00:25:51.587 "core_mask": "0x2", 00:25:51.587 "workload": "verify", 00:25:51.587 "status": "finished", 00:25:51.587 "verify_range": { 00:25:51.587 "start": 0, 00:25:51.587 "length": 8192 00:25:51.587 }, 00:25:51.587 "queue_depth": 128, 00:25:51.587 "io_size": 4096, 00:25:51.587 "runtime": 1.038797, 00:25:51.587 "iops": 4932.628800429728, 00:25:51.587 "mibps": 19.268081251678623, 00:25:51.587 "io_failed": 0, 00:25:51.587 "io_timeout": 0, 00:25:51.587 "avg_latency_us": 25592.95483736664, 00:25:51.587 "min_latency_us": 6198.613333333334, 00:25:51.587 "max_latency_us": 39321.6 00:25:51.587 } 00:25:51.587 ], 00:25:51.587 "core_count": 1 00:25:51.587 } 00:25:51.850 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:25:51.850 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.850 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.850 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.850 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:25:51.850 "subsystems": [ 00:25:51.850 { 00:25:51.850 "subsystem": "keyring", 
00:25:51.850 "config": [ 00:25:51.850 { 00:25:51.850 "method": "keyring_file_add_key", 00:25:51.850 "params": { 00:25:51.850 "name": "key0", 00:25:51.850 "path": "/tmp/tmp.Sbd8on77ik" 00:25:51.850 } 00:25:51.850 } 00:25:51.850 ] 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "subsystem": "iobuf", 00:25:51.850 "config": [ 00:25:51.850 { 00:25:51.850 "method": "iobuf_set_options", 00:25:51.850 "params": { 00:25:51.850 "small_pool_count": 8192, 00:25:51.850 "large_pool_count": 1024, 00:25:51.850 "small_bufsize": 8192, 00:25:51.850 "large_bufsize": 135168, 00:25:51.850 "enable_numa": false 00:25:51.850 } 00:25:51.850 } 00:25:51.850 ] 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "subsystem": "sock", 00:25:51.850 "config": [ 00:25:51.850 { 00:25:51.850 "method": "sock_set_default_impl", 00:25:51.850 "params": { 00:25:51.850 "impl_name": "posix" 00:25:51.850 } 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "method": "sock_impl_set_options", 00:25:51.850 "params": { 00:25:51.850 "impl_name": "ssl", 00:25:51.850 "recv_buf_size": 4096, 00:25:51.850 "send_buf_size": 4096, 00:25:51.850 "enable_recv_pipe": true, 00:25:51.850 "enable_quickack": false, 00:25:51.850 "enable_placement_id": 0, 00:25:51.850 "enable_zerocopy_send_server": true, 00:25:51.850 "enable_zerocopy_send_client": false, 00:25:51.850 "zerocopy_threshold": 0, 00:25:51.850 "tls_version": 0, 00:25:51.850 "enable_ktls": false 00:25:51.850 } 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "method": "sock_impl_set_options", 00:25:51.850 "params": { 00:25:51.850 "impl_name": "posix", 00:25:51.850 "recv_buf_size": 2097152, 00:25:51.850 "send_buf_size": 2097152, 00:25:51.850 "enable_recv_pipe": true, 00:25:51.850 "enable_quickack": false, 00:25:51.850 "enable_placement_id": 0, 00:25:51.850 "enable_zerocopy_send_server": true, 00:25:51.850 "enable_zerocopy_send_client": false, 00:25:51.850 "zerocopy_threshold": 0, 00:25:51.850 "tls_version": 0, 00:25:51.850 "enable_ktls": false 00:25:51.850 } 00:25:51.850 } 00:25:51.850 ] 
00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "subsystem": "vmd", 00:25:51.850 "config": [] 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "subsystem": "accel", 00:25:51.850 "config": [ 00:25:51.850 { 00:25:51.850 "method": "accel_set_options", 00:25:51.850 "params": { 00:25:51.850 "small_cache_size": 128, 00:25:51.850 "large_cache_size": 16, 00:25:51.850 "task_count": 2048, 00:25:51.850 "sequence_count": 2048, 00:25:51.850 "buf_count": 2048 00:25:51.850 } 00:25:51.850 } 00:25:51.850 ] 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "subsystem": "bdev", 00:25:51.850 "config": [ 00:25:51.850 { 00:25:51.850 "method": "bdev_set_options", 00:25:51.850 "params": { 00:25:51.850 "bdev_io_pool_size": 65535, 00:25:51.850 "bdev_io_cache_size": 256, 00:25:51.850 "bdev_auto_examine": true, 00:25:51.850 "iobuf_small_cache_size": 128, 00:25:51.850 "iobuf_large_cache_size": 16 00:25:51.850 } 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "method": "bdev_raid_set_options", 00:25:51.850 "params": { 00:25:51.850 "process_window_size_kb": 1024, 00:25:51.850 "process_max_bandwidth_mb_sec": 0 00:25:51.850 } 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "method": "bdev_iscsi_set_options", 00:25:51.850 "params": { 00:25:51.850 "timeout_sec": 30 00:25:51.850 } 00:25:51.850 }, 00:25:51.850 { 00:25:51.850 "method": "bdev_nvme_set_options", 00:25:51.850 "params": { 00:25:51.850 "action_on_timeout": "none", 00:25:51.850 "timeout_us": 0, 00:25:51.850 "timeout_admin_us": 0, 00:25:51.850 "keep_alive_timeout_ms": 10000, 00:25:51.850 "arbitration_burst": 0, 00:25:51.850 "low_priority_weight": 0, 00:25:51.850 "medium_priority_weight": 0, 00:25:51.850 "high_priority_weight": 0, 00:25:51.850 "nvme_adminq_poll_period_us": 10000, 00:25:51.850 "nvme_ioq_poll_period_us": 0, 00:25:51.850 "io_queue_requests": 0, 00:25:51.850 "delay_cmd_submit": true, 00:25:51.850 "transport_retry_count": 4, 00:25:51.850 "bdev_retry_count": 3, 00:25:51.850 "transport_ack_timeout": 0, 00:25:51.850 "ctrlr_loss_timeout_sec": 0, 00:25:51.850 
"reconnect_delay_sec": 0, 00:25:51.850 "fast_io_fail_timeout_sec": 0, 00:25:51.850 "disable_auto_failback": false, 00:25:51.850 "generate_uuids": false, 00:25:51.850 "transport_tos": 0, 00:25:51.850 "nvme_error_stat": false, 00:25:51.850 "rdma_srq_size": 0, 00:25:51.850 "io_path_stat": false, 00:25:51.850 "allow_accel_sequence": false, 00:25:51.850 "rdma_max_cq_size": 0, 00:25:51.850 "rdma_cm_event_timeout_ms": 0, 00:25:51.850 "dhchap_digests": [ 00:25:51.850 "sha256", 00:25:51.850 "sha384", 00:25:51.850 "sha512" 00:25:51.850 ], 00:25:51.851 "dhchap_dhgroups": [ 00:25:51.851 "null", 00:25:51.851 "ffdhe2048", 00:25:51.851 "ffdhe3072", 00:25:51.851 "ffdhe4096", 00:25:51.851 "ffdhe6144", 00:25:51.851 "ffdhe8192" 00:25:51.851 ] 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "bdev_nvme_set_hotplug", 00:25:51.851 "params": { 00:25:51.851 "period_us": 100000, 00:25:51.851 "enable": false 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "bdev_malloc_create", 00:25:51.851 "params": { 00:25:51.851 "name": "malloc0", 00:25:51.851 "num_blocks": 8192, 00:25:51.851 "block_size": 4096, 00:25:51.851 "physical_block_size": 4096, 00:25:51.851 "uuid": "174546a0-f1d0-4154-b4d8-b1bd8ca883de", 00:25:51.851 "optimal_io_boundary": 0, 00:25:51.851 "md_size": 0, 00:25:51.851 "dif_type": 0, 00:25:51.851 "dif_is_head_of_md": false, 00:25:51.851 "dif_pi_format": 0 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "bdev_wait_for_examine" 00:25:51.851 } 00:25:51.851 ] 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "subsystem": "nbd", 00:25:51.851 "config": [] 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "subsystem": "scheduler", 00:25:51.851 "config": [ 00:25:51.851 { 00:25:51.851 "method": "framework_set_scheduler", 00:25:51.851 "params": { 00:25:51.851 "name": "static" 00:25:51.851 } 00:25:51.851 } 00:25:51.851 ] 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "subsystem": "nvmf", 00:25:51.851 "config": [ 00:25:51.851 { 00:25:51.851 
"method": "nvmf_set_config", 00:25:51.851 "params": { 00:25:51.851 "discovery_filter": "match_any", 00:25:51.851 "admin_cmd_passthru": { 00:25:51.851 "identify_ctrlr": false 00:25:51.851 }, 00:25:51.851 "dhchap_digests": [ 00:25:51.851 "sha256", 00:25:51.851 "sha384", 00:25:51.851 "sha512" 00:25:51.851 ], 00:25:51.851 "dhchap_dhgroups": [ 00:25:51.851 "null", 00:25:51.851 "ffdhe2048", 00:25:51.851 "ffdhe3072", 00:25:51.851 "ffdhe4096", 00:25:51.851 "ffdhe6144", 00:25:51.851 "ffdhe8192" 00:25:51.851 ] 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_set_max_subsystems", 00:25:51.851 "params": { 00:25:51.851 "max_subsystems": 1024 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_set_crdt", 00:25:51.851 "params": { 00:25:51.851 "crdt1": 0, 00:25:51.851 "crdt2": 0, 00:25:51.851 "crdt3": 0 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_create_transport", 00:25:51.851 "params": { 00:25:51.851 "trtype": "TCP", 00:25:51.851 "max_queue_depth": 128, 00:25:51.851 "max_io_qpairs_per_ctrlr": 127, 00:25:51.851 "in_capsule_data_size": 4096, 00:25:51.851 "max_io_size": 131072, 00:25:51.851 "io_unit_size": 131072, 00:25:51.851 "max_aq_depth": 128, 00:25:51.851 "num_shared_buffers": 511, 00:25:51.851 "buf_cache_size": 4294967295, 00:25:51.851 "dif_insert_or_strip": false, 00:25:51.851 "zcopy": false, 00:25:51.851 "c2h_success": false, 00:25:51.851 "sock_priority": 0, 00:25:51.851 "abort_timeout_sec": 1, 00:25:51.851 "ack_timeout": 0, 00:25:51.851 "data_wr_pool_size": 0 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_create_subsystem", 00:25:51.851 "params": { 00:25:51.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.851 "allow_any_host": false, 00:25:51.851 "serial_number": "00000000000000000000", 00:25:51.851 "model_number": "SPDK bdev Controller", 00:25:51.851 "max_namespaces": 32, 00:25:51.851 "min_cntlid": 1, 00:25:51.851 "max_cntlid": 65519, 00:25:51.851 "ana_reporting": 
false 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_subsystem_add_host", 00:25:51.851 "params": { 00:25:51.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.851 "host": "nqn.2016-06.io.spdk:host1", 00:25:51.851 "psk": "key0" 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_subsystem_add_ns", 00:25:51.851 "params": { 00:25:51.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.851 "namespace": { 00:25:51.851 "nsid": 1, 00:25:51.851 "bdev_name": "malloc0", 00:25:51.851 "nguid": "174546A0F1D04154B4D8B1BD8CA883DE", 00:25:51.851 "uuid": "174546a0-f1d0-4154-b4d8-b1bd8ca883de", 00:25:51.851 "no_auto_visible": false 00:25:51.851 } 00:25:51.851 } 00:25:51.851 }, 00:25:51.851 { 00:25:51.851 "method": "nvmf_subsystem_add_listener", 00:25:51.851 "params": { 00:25:51.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.851 "listen_address": { 00:25:51.851 "trtype": "TCP", 00:25:51.851 "adrfam": "IPv4", 00:25:51.851 "traddr": "10.0.0.2", 00:25:51.851 "trsvcid": "4420" 00:25:51.851 }, 00:25:51.851 "secure_channel": false, 00:25:51.851 "sock_impl": "ssl" 00:25:51.851 } 00:25:51.851 } 00:25:51.851 ] 00:25:51.851 } 00:25:51.851 ] 00:25:51.851 }' 00:25:51.851 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:52.113 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:25:52.113 "subsystems": [ 00:25:52.113 { 00:25:52.113 "subsystem": "keyring", 00:25:52.113 "config": [ 00:25:52.113 { 00:25:52.113 "method": "keyring_file_add_key", 00:25:52.113 "params": { 00:25:52.113 "name": "key0", 00:25:52.113 "path": "/tmp/tmp.Sbd8on77ik" 00:25:52.113 } 00:25:52.113 } 00:25:52.113 ] 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "subsystem": "iobuf", 00:25:52.113 "config": [ 00:25:52.113 { 00:25:52.113 "method": "iobuf_set_options", 00:25:52.113 "params": { 00:25:52.113 "small_pool_count": 
8192, 00:25:52.113 "large_pool_count": 1024, 00:25:52.113 "small_bufsize": 8192, 00:25:52.113 "large_bufsize": 135168, 00:25:52.113 "enable_numa": false 00:25:52.113 } 00:25:52.113 } 00:25:52.113 ] 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "subsystem": "sock", 00:25:52.113 "config": [ 00:25:52.113 { 00:25:52.113 "method": "sock_set_default_impl", 00:25:52.113 "params": { 00:25:52.113 "impl_name": "posix" 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "sock_impl_set_options", 00:25:52.113 "params": { 00:25:52.113 "impl_name": "ssl", 00:25:52.113 "recv_buf_size": 4096, 00:25:52.113 "send_buf_size": 4096, 00:25:52.113 "enable_recv_pipe": true, 00:25:52.113 "enable_quickack": false, 00:25:52.113 "enable_placement_id": 0, 00:25:52.113 "enable_zerocopy_send_server": true, 00:25:52.113 "enable_zerocopy_send_client": false, 00:25:52.113 "zerocopy_threshold": 0, 00:25:52.113 "tls_version": 0, 00:25:52.113 "enable_ktls": false 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "sock_impl_set_options", 00:25:52.113 "params": { 00:25:52.113 "impl_name": "posix", 00:25:52.113 "recv_buf_size": 2097152, 00:25:52.113 "send_buf_size": 2097152, 00:25:52.113 "enable_recv_pipe": true, 00:25:52.113 "enable_quickack": false, 00:25:52.113 "enable_placement_id": 0, 00:25:52.113 "enable_zerocopy_send_server": true, 00:25:52.113 "enable_zerocopy_send_client": false, 00:25:52.113 "zerocopy_threshold": 0, 00:25:52.113 "tls_version": 0, 00:25:52.113 "enable_ktls": false 00:25:52.113 } 00:25:52.113 } 00:25:52.113 ] 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "subsystem": "vmd", 00:25:52.113 "config": [] 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "subsystem": "accel", 00:25:52.113 "config": [ 00:25:52.113 { 00:25:52.113 "method": "accel_set_options", 00:25:52.113 "params": { 00:25:52.113 "small_cache_size": 128, 00:25:52.113 "large_cache_size": 16, 00:25:52.113 "task_count": 2048, 00:25:52.113 "sequence_count": 2048, 00:25:52.113 "buf_count": 2048 
00:25:52.113 } 00:25:52.113 } 00:25:52.113 ] 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "subsystem": "bdev", 00:25:52.113 "config": [ 00:25:52.113 { 00:25:52.113 "method": "bdev_set_options", 00:25:52.113 "params": { 00:25:52.113 "bdev_io_pool_size": 65535, 00:25:52.113 "bdev_io_cache_size": 256, 00:25:52.113 "bdev_auto_examine": true, 00:25:52.113 "iobuf_small_cache_size": 128, 00:25:52.113 "iobuf_large_cache_size": 16 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "bdev_raid_set_options", 00:25:52.113 "params": { 00:25:52.113 "process_window_size_kb": 1024, 00:25:52.113 "process_max_bandwidth_mb_sec": 0 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "bdev_iscsi_set_options", 00:25:52.113 "params": { 00:25:52.113 "timeout_sec": 30 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "bdev_nvme_set_options", 00:25:52.113 "params": { 00:25:52.113 "action_on_timeout": "none", 00:25:52.113 "timeout_us": 0, 00:25:52.113 "timeout_admin_us": 0, 00:25:52.113 "keep_alive_timeout_ms": 10000, 00:25:52.113 "arbitration_burst": 0, 00:25:52.113 "low_priority_weight": 0, 00:25:52.113 "medium_priority_weight": 0, 00:25:52.113 "high_priority_weight": 0, 00:25:52.113 "nvme_adminq_poll_period_us": 10000, 00:25:52.113 "nvme_ioq_poll_period_us": 0, 00:25:52.113 "io_queue_requests": 512, 00:25:52.113 "delay_cmd_submit": true, 00:25:52.113 "transport_retry_count": 4, 00:25:52.113 "bdev_retry_count": 3, 00:25:52.113 "transport_ack_timeout": 0, 00:25:52.113 "ctrlr_loss_timeout_sec": 0, 00:25:52.113 "reconnect_delay_sec": 0, 00:25:52.113 "fast_io_fail_timeout_sec": 0, 00:25:52.113 "disable_auto_failback": false, 00:25:52.113 "generate_uuids": false, 00:25:52.113 "transport_tos": 0, 00:25:52.113 "nvme_error_stat": false, 00:25:52.113 "rdma_srq_size": 0, 00:25:52.113 "io_path_stat": false, 00:25:52.113 "allow_accel_sequence": false, 00:25:52.113 "rdma_max_cq_size": 0, 00:25:52.113 "rdma_cm_event_timeout_ms": 0, 00:25:52.113 
"dhchap_digests": [ 00:25:52.113 "sha256", 00:25:52.113 "sha384", 00:25:52.113 "sha512" 00:25:52.113 ], 00:25:52.113 "dhchap_dhgroups": [ 00:25:52.113 "null", 00:25:52.113 "ffdhe2048", 00:25:52.113 "ffdhe3072", 00:25:52.113 "ffdhe4096", 00:25:52.113 "ffdhe6144", 00:25:52.113 "ffdhe8192" 00:25:52.113 ] 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.113 "method": "bdev_nvme_attach_controller", 00:25:52.113 "params": { 00:25:52.113 "name": "nvme0", 00:25:52.113 "trtype": "TCP", 00:25:52.113 "adrfam": "IPv4", 00:25:52.113 "traddr": "10.0.0.2", 00:25:52.113 "trsvcid": "4420", 00:25:52.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.113 "prchk_reftag": false, 00:25:52.113 "prchk_guard": false, 00:25:52.113 "ctrlr_loss_timeout_sec": 0, 00:25:52.113 "reconnect_delay_sec": 0, 00:25:52.113 "fast_io_fail_timeout_sec": 0, 00:25:52.113 "psk": "key0", 00:25:52.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:52.113 "hdgst": false, 00:25:52.113 "ddgst": false, 00:25:52.113 "multipath": "multipath" 00:25:52.113 } 00:25:52.113 }, 00:25:52.113 { 00:25:52.114 "method": "bdev_nvme_set_hotplug", 00:25:52.114 "params": { 00:25:52.114 "period_us": 100000, 00:25:52.114 "enable": false 00:25:52.114 } 00:25:52.114 }, 00:25:52.114 { 00:25:52.114 "method": "bdev_enable_histogram", 00:25:52.114 "params": { 00:25:52.114 "name": "nvme0n1", 00:25:52.114 "enable": true 00:25:52.114 } 00:25:52.114 }, 00:25:52.114 { 00:25:52.114 "method": "bdev_wait_for_examine" 00:25:52.114 } 00:25:52.114 ] 00:25:52.114 }, 00:25:52.114 { 00:25:52.114 "subsystem": "nbd", 00:25:52.114 "config": [] 00:25:52.114 } 00:25:52.114 ] 00:25:52.114 }' 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 2031288 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2031288 ']' 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2031288 00:25:52.114 08:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031288 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031288' 00:25:52.114 killing process with pid 2031288 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2031288 00:25:52.114 Received shutdown signal, test time was about 1.000000 seconds 00:25:52.114 00:25:52.114 Latency(us) 00:25:52.114 [2024-11-20T07:22:56.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.114 [2024-11-20T07:22:56.843Z] =================================================================================================================== 00:25:52.114 [2024-11-20T07:22:56.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.114 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2031288 00:25:52.374 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 2031210 00:25:52.374 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2031210 ']' 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2031210 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.375 
08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031210 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031210' 00:25:52.375 killing process with pid 2031210 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2031210 00:25:52.375 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2031210 00:25:52.375 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:25:52.375 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:52.375 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.375 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:25:52.375 "subsystems": [ 00:25:52.375 { 00:25:52.375 "subsystem": "keyring", 00:25:52.375 "config": [ 00:25:52.375 { 00:25:52.375 "method": "keyring_file_add_key", 00:25:52.375 "params": { 00:25:52.375 "name": "key0", 00:25:52.375 "path": "/tmp/tmp.Sbd8on77ik" 00:25:52.375 } 00:25:52.375 } 00:25:52.375 ] 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "subsystem": "iobuf", 00:25:52.375 "config": [ 00:25:52.375 { 00:25:52.375 "method": "iobuf_set_options", 00:25:52.375 "params": { 00:25:52.375 "small_pool_count": 8192, 00:25:52.375 "large_pool_count": 1024, 00:25:52.375 "small_bufsize": 8192, 00:25:52.375 "large_bufsize": 135168, 00:25:52.375 "enable_numa": false 00:25:52.375 } 00:25:52.375 } 00:25:52.375 ] 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "subsystem": "sock", 00:25:52.375 "config": [ 
00:25:52.375 { 00:25:52.375 "method": "sock_set_default_impl", 00:25:52.375 "params": { 00:25:52.375 "impl_name": "posix" 00:25:52.375 } 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "method": "sock_impl_set_options", 00:25:52.375 "params": { 00:25:52.375 "impl_name": "ssl", 00:25:52.375 "recv_buf_size": 4096, 00:25:52.375 "send_buf_size": 4096, 00:25:52.375 "enable_recv_pipe": true, 00:25:52.375 "enable_quickack": false, 00:25:52.375 "enable_placement_id": 0, 00:25:52.375 "enable_zerocopy_send_server": true, 00:25:52.375 "enable_zerocopy_send_client": false, 00:25:52.375 "zerocopy_threshold": 0, 00:25:52.375 "tls_version": 0, 00:25:52.375 "enable_ktls": false 00:25:52.375 } 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "method": "sock_impl_set_options", 00:25:52.375 "params": { 00:25:52.375 "impl_name": "posix", 00:25:52.375 "recv_buf_size": 2097152, 00:25:52.375 "send_buf_size": 2097152, 00:25:52.375 "enable_recv_pipe": true, 00:25:52.375 "enable_quickack": false, 00:25:52.375 "enable_placement_id": 0, 00:25:52.375 "enable_zerocopy_send_server": true, 00:25:52.375 "enable_zerocopy_send_client": false, 00:25:52.375 "zerocopy_threshold": 0, 00:25:52.375 "tls_version": 0, 00:25:52.375 "enable_ktls": false 00:25:52.375 } 00:25:52.375 } 00:25:52.375 ] 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "subsystem": "vmd", 00:25:52.375 "config": [] 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "subsystem": "accel", 00:25:52.375 "config": [ 00:25:52.375 { 00:25:52.375 "method": "accel_set_options", 00:25:52.375 "params": { 00:25:52.375 "small_cache_size": 128, 00:25:52.375 "large_cache_size": 16, 00:25:52.375 "task_count": 2048, 00:25:52.375 "sequence_count": 2048, 00:25:52.375 "buf_count": 2048 00:25:52.375 } 00:25:52.375 } 00:25:52.375 ] 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "subsystem": "bdev", 00:25:52.375 "config": [ 00:25:52.375 { 00:25:52.375 "method": "bdev_set_options", 00:25:52.375 "params": { 00:25:52.375 "bdev_io_pool_size": 65535, 00:25:52.375 "bdev_io_cache_size": 
256, 00:25:52.375 "bdev_auto_examine": true, 00:25:52.375 "iobuf_small_cache_size": 128, 00:25:52.375 "iobuf_large_cache_size": 16 00:25:52.375 } 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "method": "bdev_raid_set_options", 00:25:52.375 "params": { 00:25:52.375 "process_window_size_kb": 1024, 00:25:52.375 "process_max_bandwidth_mb_sec": 0 00:25:52.375 } 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "method": "bdev_iscsi_set_options", 00:25:52.375 "params": { 00:25:52.375 "timeout_sec": 30 00:25:52.375 } 00:25:52.375 }, 00:25:52.375 { 00:25:52.375 "method": "bdev_nvme_set_options", 00:25:52.375 "params": { 00:25:52.375 "action_on_timeout": "none", 00:25:52.375 "timeout_us": 0, 00:25:52.375 "timeout_admin_us": 0, 00:25:52.375 "keep_alive_timeout_ms": 10000, 00:25:52.375 "arbitration_burst": 0, 00:25:52.375 "low_priority_weight": 0, 00:25:52.375 "medium_priority_weight": 0, 00:25:52.375 "high_priority_weight": 0, 00:25:52.375 "nvme_adminq_poll_period_us": 10000, 00:25:52.375 "nvme_ioq_poll_period_us": 0, 00:25:52.375 "io_queue_requests": 0, 00:25:52.375 "delay_cmd_submit": true, 00:25:52.375 "transport_retry_count": 4, 00:25:52.375 "bdev_retry_count": 3, 00:25:52.375 "transport_ack_timeout": 0, 00:25:52.375 "ctrlr_loss_timeout_sec": 0, 00:25:52.375 "reconnect_delay_sec": 0, 00:25:52.375 "fast_io_fail_timeout_sec": 0, 00:25:52.376 "disable_auto_failback": false, 00:25:52.376 "generate_uuids": false, 00:25:52.376 "transport_tos": 0, 00:25:52.376 "nvme_error_stat": false, 00:25:52.376 "rdma_srq_size": 0, 00:25:52.376 "io_path_stat": false, 00:25:52.376 "allow_accel_sequence": false, 00:25:52.376 "rdma_max_cq_size": 0, 00:25:52.376 "rdma_cm_event_timeout_ms": 0, 00:25:52.376 "dhchap_digests": [ 00:25:52.376 "sha256", 00:25:52.376 "sha384", 00:25:52.376 "sha512" 00:25:52.376 ], 00:25:52.376 "dhchap_dhgroups": [ 00:25:52.376 "null", 00:25:52.376 "ffdhe2048", 00:25:52.376 "ffdhe3072", 00:25:52.376 "ffdhe4096", 00:25:52.376 "ffdhe6144", 00:25:52.376 "ffdhe8192" 00:25:52.376 ] 
00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "bdev_nvme_set_hotplug", 00:25:52.376 "params": { 00:25:52.376 "period_us": 100000, 00:25:52.376 "enable": false 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "bdev_malloc_create", 00:25:52.376 "params": { 00:25:52.376 "name": "malloc0", 00:25:52.376 "num_blocks": 8192, 00:25:52.376 "block_size": 4096, 00:25:52.376 "physical_block_size": 4096, 00:25:52.376 "uuid": "174546a0-f1d0-4154-b4d8-b1bd8ca883de", 00:25:52.376 "optimal_io_boundary": 0, 00:25:52.376 "md_size": 0, 00:25:52.376 "dif_type": 0, 00:25:52.376 "dif_is_head_of_md": false, 00:25:52.376 "dif_pi_format": 0 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "bdev_wait_for_examine" 00:25:52.376 } 00:25:52.376 ] 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "subsystem": "nbd", 00:25:52.376 "config": [] 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "subsystem": "scheduler", 00:25:52.376 "config": [ 00:25:52.376 { 00:25:52.376 "method": "framework_set_scheduler", 00:25:52.376 "params": { 00:25:52.376 "name": "static" 00:25:52.376 } 00:25:52.376 } 00:25:52.376 ] 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "subsystem": "nvmf", 00:25:52.376 "config": [ 00:25:52.376 { 00:25:52.376 "method": "nvmf_set_config", 00:25:52.376 "params": { 00:25:52.376 "discovery_filter": "match_any", 00:25:52.376 "admin_cmd_passthru": { 00:25:52.376 "identify_ctrlr": false 00:25:52.376 }, 00:25:52.376 "dhchap_digests": [ 00:25:52.376 "sha256", 00:25:52.376 "sha384", 00:25:52.376 "sha512" 00:25:52.376 ], 00:25:52.376 "dhchap_dhgroups": [ 00:25:52.376 "null", 00:25:52.376 "ffdhe2048", 00:25:52.376 "ffdhe3072", 00:25:52.376 "ffdhe4096", 00:25:52.376 "ffdhe6144", 00:25:52.376 "ffdhe8192" 00:25:52.376 ] 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "nvmf_set_max_subsystems", 00:25:52.376 "params": { 00:25:52.376 "max_subsystems": 1024 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": 
"nvmf_set_crdt", 00:25:52.376 "params": { 00:25:52.376 "crdt1": 0, 00:25:52.376 "crdt2": 0, 00:25:52.376 "crdt3": 0 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "nvmf_create_transport", 00:25:52.376 "params": { 00:25:52.376 "trtype": "TCP", 00:25:52.376 "max_queue_depth": 128, 00:25:52.376 "max_io_qpairs_per_ctrlr": 127, 00:25:52.376 "in_capsule_data_size": 4096, 00:25:52.376 "max_io_size": 131072, 00:25:52.376 "io_unit_size": 131072, 00:25:52.376 "max_aq_depth": 128, 00:25:52.376 "num_shared_buffers": 511, 00:25:52.376 "buf_cache_size": 4294967295, 00:25:52.376 "dif_insert_or_strip": false, 00:25:52.376 "zcopy": false, 00:25:52.376 "c2h_success": false, 00:25:52.376 "sock_priority": 0, 00:25:52.376 "abort_timeout_sec": 1, 00:25:52.376 "ack_timeout": 0, 00:25:52.376 "data_wr_pool_size": 0 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "nvmf_create_subsystem", 00:25:52.376 "params": { 00:25:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.376 "allow_any_host": false, 00:25:52.376 "serial_number": "00000000000000000000", 00:25:52.376 "model_number": "SPDK bdev Controller", 00:25:52.376 "max_namespaces": 32, 00:25:52.376 "min_cntlid": 1, 00:25:52.376 "max_cntlid": 65519, 00:25:52.376 "ana_reporting": false 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "nvmf_subsystem_add_host", 00:25:52.376 "params": { 00:25:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.376 "host": "nqn.2016-06.io.spdk:host1", 00:25:52.376 "psk": "key0" 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 00:25:52.376 "method": "nvmf_subsystem_add_ns", 00:25:52.376 "params": { 00:25:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.376 "namespace": { 00:25:52.376 "nsid": 1, 00:25:52.376 "bdev_name": "malloc0", 00:25:52.376 "nguid": "174546A0F1D04154B4D8B1BD8CA883DE", 00:25:52.376 "uuid": "174546a0-f1d0-4154-b4d8-b1bd8ca883de", 00:25:52.376 "no_auto_visible": false 00:25:52.376 } 00:25:52.376 } 00:25:52.376 }, 00:25:52.376 { 
00:25:52.376 "method": "nvmf_subsystem_add_listener", 00:25:52.376 "params": { 00:25:52.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.376 "listen_address": { 00:25:52.376 "trtype": "TCP", 00:25:52.376 "adrfam": "IPv4", 00:25:52.376 "traddr": "10.0.0.2", 00:25:52.376 "trsvcid": "4420" 00:25:52.376 }, 00:25:52.376 "secure_channel": false, 00:25:52.376 "sock_impl": "ssl" 00:25:52.376 } 00:25:52.376 } 00:25:52.376 ] 00:25:52.376 } 00:25:52.376 ] 00:25:52.376 }' 00:25:52.376 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.376 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2031971 00:25:52.376 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2031971 00:25:52.376 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:52.376 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2031971 ']' 00:25:52.377 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.377 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.377 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.377 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.377 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.637 [2024-11-20 08:22:57.110146] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:25:52.637 [2024-11-20 08:22:57.110199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.637 [2024-11-20 08:22:57.195302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.637 [2024-11-20 08:22:57.228961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.637 [2024-11-20 08:22:57.228995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.637 [2024-11-20 08:22:57.229007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.637 [2024-11-20 08:22:57.229014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.637 [2024-11-20 08:22:57.229020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.637 [2024-11-20 08:22:57.229592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.897 [2024-11-20 08:22:57.428233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.897 [2024-11-20 08:22:57.460242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:52.897 [2024-11-20 08:22:57.460473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=2032172 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 2032172 /var/tmp/bdevperf.sock 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2032172 ']' 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:53.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.469 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:53.469 "subsystems": [ 00:25:53.469 { 00:25:53.469 "subsystem": "keyring", 00:25:53.469 "config": [ 00:25:53.469 { 00:25:53.469 "method": "keyring_file_add_key", 00:25:53.469 "params": { 00:25:53.469 "name": "key0", 00:25:53.469 "path": "/tmp/tmp.Sbd8on77ik" 00:25:53.469 } 00:25:53.469 } 00:25:53.469 ] 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "subsystem": "iobuf", 00:25:53.469 "config": [ 00:25:53.469 { 00:25:53.469 "method": "iobuf_set_options", 00:25:53.469 "params": { 00:25:53.469 "small_pool_count": 8192, 00:25:53.469 "large_pool_count": 1024, 00:25:53.469 "small_bufsize": 8192, 00:25:53.469 "large_bufsize": 135168, 00:25:53.469 "enable_numa": false 00:25:53.469 } 00:25:53.469 } 00:25:53.469 ] 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "subsystem": "sock", 00:25:53.469 "config": [ 00:25:53.469 { 00:25:53.469 "method": "sock_set_default_impl", 00:25:53.469 "params": { 00:25:53.469 "impl_name": "posix" 00:25:53.469 } 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "method": "sock_impl_set_options", 00:25:53.469 "params": { 00:25:53.469 "impl_name": "ssl", 00:25:53.469 "recv_buf_size": 4096, 00:25:53.469 "send_buf_size": 4096, 00:25:53.469 "enable_recv_pipe": true, 00:25:53.469 "enable_quickack": false, 00:25:53.469 "enable_placement_id": 0, 00:25:53.469 "enable_zerocopy_send_server": true, 00:25:53.469 
"enable_zerocopy_send_client": false, 00:25:53.469 "zerocopy_threshold": 0, 00:25:53.469 "tls_version": 0, 00:25:53.469 "enable_ktls": false 00:25:53.469 } 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "method": "sock_impl_set_options", 00:25:53.469 "params": { 00:25:53.469 "impl_name": "posix", 00:25:53.469 "recv_buf_size": 2097152, 00:25:53.469 "send_buf_size": 2097152, 00:25:53.469 "enable_recv_pipe": true, 00:25:53.469 "enable_quickack": false, 00:25:53.469 "enable_placement_id": 0, 00:25:53.469 "enable_zerocopy_send_server": true, 00:25:53.469 "enable_zerocopy_send_client": false, 00:25:53.469 "zerocopy_threshold": 0, 00:25:53.469 "tls_version": 0, 00:25:53.469 "enable_ktls": false 00:25:53.469 } 00:25:53.469 } 00:25:53.469 ] 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "subsystem": "vmd", 00:25:53.469 "config": [] 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "subsystem": "accel", 00:25:53.469 "config": [ 00:25:53.469 { 00:25:53.469 "method": "accel_set_options", 00:25:53.469 "params": { 00:25:53.469 "small_cache_size": 128, 00:25:53.469 "large_cache_size": 16, 00:25:53.469 "task_count": 2048, 00:25:53.469 "sequence_count": 2048, 00:25:53.469 "buf_count": 2048 00:25:53.469 } 00:25:53.469 } 00:25:53.469 ] 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "subsystem": "bdev", 00:25:53.469 "config": [ 00:25:53.469 { 00:25:53.469 "method": "bdev_set_options", 00:25:53.469 "params": { 00:25:53.469 "bdev_io_pool_size": 65535, 00:25:53.469 "bdev_io_cache_size": 256, 00:25:53.469 "bdev_auto_examine": true, 00:25:53.469 "iobuf_small_cache_size": 128, 00:25:53.469 "iobuf_large_cache_size": 16 00:25:53.469 } 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "method": "bdev_raid_set_options", 00:25:53.469 "params": { 00:25:53.469 "process_window_size_kb": 1024, 00:25:53.469 "process_max_bandwidth_mb_sec": 0 00:25:53.469 } 00:25:53.469 }, 00:25:53.469 { 00:25:53.469 "method": "bdev_iscsi_set_options", 00:25:53.469 "params": { 00:25:53.469 "timeout_sec": 30 00:25:53.469 } 00:25:53.469 }, 
00:25:53.469 { 00:25:53.469 "method": "bdev_nvme_set_options", 00:25:53.469 "params": { 00:25:53.469 "action_on_timeout": "none", 00:25:53.469 "timeout_us": 0, 00:25:53.469 "timeout_admin_us": 0, 00:25:53.469 "keep_alive_timeout_ms": 10000, 00:25:53.469 "arbitration_burst": 0, 00:25:53.469 "low_priority_weight": 0, 00:25:53.469 "medium_priority_weight": 0, 00:25:53.469 "high_priority_weight": 0, 00:25:53.469 "nvme_adminq_poll_period_us": 10000, 00:25:53.469 "nvme_ioq_poll_period_us": 0, 00:25:53.469 "io_queue_requests": 512, 00:25:53.470 "delay_cmd_submit": true, 00:25:53.470 "transport_retry_count": 4, 00:25:53.470 "bdev_retry_count": 3, 00:25:53.470 "transport_ack_timeout": 0, 00:25:53.470 "ctrlr_loss_timeout_sec": 0, 00:25:53.470 "reconnect_delay_sec": 0, 00:25:53.470 "fast_io_fail_timeout_sec": 0, 00:25:53.470 "disable_auto_failback": false, 00:25:53.470 "generate_uuids": false, 00:25:53.470 "transport_tos": 0, 00:25:53.470 "nvme_error_stat": false, 00:25:53.470 "rdma_srq_size": 0, 00:25:53.470 "io_path_stat": false, 00:25:53.470 "allow_accel_sequence": false, 00:25:53.470 "rdma_max_cq_size": 0, 00:25:53.470 "rdma_cm_event_timeout_ms": 0, 00:25:53.470 "dhchap_digests": [ 00:25:53.470 "sha256", 00:25:53.470 "sha384", 00:25:53.470 "sha512" 00:25:53.470 ], 00:25:53.470 "dhchap_dhgroups": [ 00:25:53.470 "null", 00:25:53.470 "ffdhe2048", 00:25:53.470 "ffdhe3072", 00:25:53.470 "ffdhe4096", 00:25:53.470 "ffdhe6144", 00:25:53.470 "ffdhe8192" 00:25:53.470 ] 00:25:53.470 } 00:25:53.470 }, 00:25:53.470 { 00:25:53.470 "method": "bdev_nvme_attach_controller", 00:25:53.470 "params": { 00:25:53.470 "name": "nvme0", 00:25:53.470 "trtype": "TCP", 00:25:53.470 "adrfam": "IPv4", 00:25:53.470 "traddr": "10.0.0.2", 00:25:53.470 "trsvcid": "4420", 00:25:53.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.470 "prchk_reftag": false, 00:25:53.470 "prchk_guard": false, 00:25:53.470 "ctrlr_loss_timeout_sec": 0, 00:25:53.470 "reconnect_delay_sec": 0, 00:25:53.470 
"fast_io_fail_timeout_sec": 0, 00:25:53.470 "psk": "key0", 00:25:53.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.470 "hdgst": false, 00:25:53.470 "ddgst": false, 00:25:53.470 "multipath": "multipath" 00:25:53.470 } 00:25:53.470 }, 00:25:53.470 { 00:25:53.470 "method": "bdev_nvme_set_hotplug", 00:25:53.470 "params": { 00:25:53.470 "period_us": 100000, 00:25:53.470 "enable": false 00:25:53.470 } 00:25:53.470 }, 00:25:53.470 { 00:25:53.470 "method": "bdev_enable_histogram", 00:25:53.470 "params": { 00:25:53.470 "name": "nvme0n1", 00:25:53.470 "enable": true 00:25:53.470 } 00:25:53.470 }, 00:25:53.470 { 00:25:53.470 "method": "bdev_wait_for_examine" 00:25:53.470 } 00:25:53.470 ] 00:25:53.470 }, 00:25:53.470 { 00:25:53.470 "subsystem": "nbd", 00:25:53.470 "config": [] 00:25:53.470 } 00:25:53.470 ] 00:25:53.470 }' 00:25:53.470 [2024-11-20 08:22:58.008821] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:25:53.470 [2024-11-20 08:22:58.008882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032172 ] 00:25:53.470 [2024-11-20 08:22:58.098856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.470 [2024-11-20 08:22:58.128966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.791 [2024-11-20 08:22:58.264171] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:54.095 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.095 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:54.095 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:25:54.095 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:25:54.409 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.409 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:54.409 Running I/O for 1 seconds... 00:25:55.608 5426.00 IOPS, 21.20 MiB/s 00:25:55.608 Latency(us) 00:25:55.608 [2024-11-20T07:23:00.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.608 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:55.608 Verification LBA range: start 0x0 length 0x2000 00:25:55.608 nvme0n1 : 1.02 5449.48 21.29 0.00 0.00 23268.82 4614.83 67283.63 00:25:55.608 [2024-11-20T07:23:00.337Z] =================================================================================================================== 00:25:55.608 [2024-11-20T07:23:00.337Z] Total : 5449.48 21.29 0.00 0.00 23268.82 4614.83 67283.63 00:25:55.608 { 00:25:55.608 "results": [ 00:25:55.608 { 00:25:55.608 "job": "nvme0n1", 00:25:55.608 "core_mask": "0x2", 00:25:55.608 "workload": "verify", 00:25:55.608 "status": "finished", 00:25:55.608 "verify_range": { 00:25:55.608 "start": 0, 00:25:55.608 "length": 8192 00:25:55.608 }, 00:25:55.608 "queue_depth": 128, 00:25:55.608 "io_size": 4096, 00:25:55.608 "runtime": 1.019364, 00:25:55.608 "iops": 5449.476340149348, 00:25:55.608 "mibps": 21.28701695370839, 00:25:55.608 "io_failed": 0, 00:25:55.608 "io_timeout": 0, 00:25:55.608 "avg_latency_us": 23268.82421362136, 00:25:55.608 "min_latency_us": 4614.826666666667, 00:25:55.608 "max_latency_us": 67283.62666666666 00:25:55.608 } 00:25:55.608 ], 00:25:55.608 "core_count": 1 00:25:55.608 } 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM 
EXIT 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:55.608 nvmf_trace.0 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2032172 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2032172 ']' 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2032172 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2032172 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2032172' 00:25:55.608 killing process with pid 2032172 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2032172 00:25:55.608 Received shutdown signal, test time was about 1.000000 seconds 00:25:55.608 00:25:55.608 Latency(us) 00:25:55.608 [2024-11-20T07:23:00.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.608 [2024-11-20T07:23:00.337Z] =================================================================================================================== 00:25:55.608 [2024-11-20T07:23:00.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.608 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2032172 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:55.868 rmmod nvme_tcp 00:25:55.868 rmmod nvme_fabrics 00:25:55.868 rmmod nvme_keyring 00:25:55.868 08:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 2031971 ']' 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 2031971 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2031971 ']' 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2031971 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031971 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031971' 00:25:55.868 killing process with pid 2031971 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2031971 00:25:55.868 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2031971 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 
00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:56.129 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 
00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:58.044 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TqPjNcCBvE /tmp/tmp.lwlDrBVqGJ /tmp/tmp.Sbd8on77ik 00:25:58.045 00:25:58.045 real 1m24.312s 00:25:58.045 user 2m10.139s 00:25:58.045 sys 0m26.843s 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.045 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:58.045 ************************************ 00:25:58.045 END TEST nvmf_tls 00:25:58.045 ************************************ 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh 
--transport=tcp 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:58.306 ************************************ 00:25:58.306 START TEST nvmf_fips 00:25:58.306 ************************************ 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:58.306 * Looking for test storage... 00:25:58.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.306 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.306 08:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:58.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.306 --rc genhtml_branch_coverage=1 00:25:58.306 --rc genhtml_function_coverage=1 00:25:58.306 --rc genhtml_legend=1 00:25:58.306 --rc geninfo_all_blocks=1 00:25:58.306 --rc geninfo_unexecuted_blocks=1 00:25:58.306 00:25:58.306 ' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:58.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.306 --rc genhtml_branch_coverage=1 00:25:58.306 --rc genhtml_function_coverage=1 00:25:58.306 --rc genhtml_legend=1 00:25:58.306 --rc geninfo_all_blocks=1 00:25:58.306 --rc geninfo_unexecuted_blocks=1 00:25:58.306 00:25:58.306 ' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:58.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.306 --rc genhtml_branch_coverage=1 00:25:58.306 --rc genhtml_function_coverage=1 00:25:58.306 --rc genhtml_legend=1 00:25:58.306 --rc geninfo_all_blocks=1 00:25:58.306 --rc geninfo_unexecuted_blocks=1 00:25:58.306 00:25:58.306 ' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:58.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.306 --rc genhtml_branch_coverage=1 00:25:58.306 --rc genhtml_function_coverage=1 00:25:58.306 --rc genhtml_legend=1 00:25:58.306 --rc geninfo_all_blocks=1 00:25:58.306 --rc geninfo_unexecuted_blocks=1 00:25:58.306 
00:25:58.306 ' 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.306 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.307 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:58.307 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.307 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:25:58.569 08:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:58.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.569 08:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.569 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:58.570 Error setting digest 00:25:58.570 40F29C07807F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:58.570 40F29C07807F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:25:58.570 08:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:25:58.570 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.717 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:06.718 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:06.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:06.718 Found net devices under 0000:31:00.0: cvl_0_0 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:06.718 Found net devices under 0000:31:00.1: cvl_0_1 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@247 -- # create_target_ns 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:06.718 08:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:06.718 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:06.982 10.0.0.1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:06.982 
08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:06.982 10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ 
tcp == tcp ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:06.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.600 ms 00:26:06.982 00:26:06.982 --- 10.0.0.1 ping statistics --- 00:26:06.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.982 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:06.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:26:06.982 00:26:06.982 --- 10.0.0.2 ping statistics --- 00:26:06.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.982 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:06.982 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.983 
08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.983 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:07.244 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=2037424 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 2037424 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2037424 ']' 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.245 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:07.245 [2024-11-20 08:23:11.862537] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:26:07.245 [2024-11-20 08:23:11.862600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.245 [2024-11-20 08:23:11.968652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.506 [2024-11-20 08:23:12.019246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.506 [2024-11-20 08:23:12.019300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.506 [2024-11-20 08:23:12.019309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.506 [2024-11-20 08:23:12.019317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:07.506 [2024-11-20 08:23:12.019324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.506 [2024-11-20 08:23:12.020135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.078 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BsM 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BsM 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BsM 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BsM 00:26:08.079 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:08.339 [2024-11-20 08:23:12.878202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.339 [2024-11-20 08:23:12.894196] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:08.339 [2024-11-20 08:23:12.894515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.339 malloc0 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2037771 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2037771 /var/tmp/bdevperf.sock 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2037771 ']' 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.340 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:08.340 [2024-11-20 08:23:13.024544] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:26:08.340 [2024-11-20 08:23:13.024625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037771 ] 00:26:08.600 [2024-11-20 08:23:13.097666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.600 [2024-11-20 08:23:13.133986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.171 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.171 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:09.171 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BsM 00:26:09.432 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:09.692 [2024-11-20 08:23:14.164621] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:09.692 TLSTESTn1 00:26:09.692 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:09.692 Running I/O 
for 10 seconds... 00:26:12.018 4067.00 IOPS, 15.89 MiB/s [2024-11-20T07:23:17.694Z] 4855.50 IOPS, 18.97 MiB/s [2024-11-20T07:23:18.636Z] 4787.00 IOPS, 18.70 MiB/s [2024-11-20T07:23:19.579Z] 4935.00 IOPS, 19.28 MiB/s [2024-11-20T07:23:20.523Z] 5043.40 IOPS, 19.70 MiB/s [2024-11-20T07:23:21.466Z] 5124.00 IOPS, 20.02 MiB/s [2024-11-20T07:23:22.408Z] 5220.29 IOPS, 20.39 MiB/s [2024-11-20T07:23:23.792Z] 5216.50 IOPS, 20.38 MiB/s [2024-11-20T07:23:24.735Z] 5228.67 IOPS, 20.42 MiB/s [2024-11-20T07:23:24.735Z] 5236.90 IOPS, 20.46 MiB/s 00:26:20.006 Latency(us) 00:26:20.006 [2024-11-20T07:23:24.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.006 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:20.006 Verification LBA range: start 0x0 length 0x2000 00:26:20.006 TLSTESTn1 : 10.01 5241.88 20.48 0.00 0.00 24386.84 6225.92 71652.69 00:26:20.006 [2024-11-20T07:23:24.736Z] =================================================================================================================== 00:26:20.007 [2024-11-20T07:23:24.736Z] Total : 5241.88 20.48 0.00 0.00 24386.84 6225.92 71652.69 00:26:20.007 { 00:26:20.007 "results": [ 00:26:20.007 { 00:26:20.007 "job": "TLSTESTn1", 00:26:20.007 "core_mask": "0x4", 00:26:20.007 "workload": "verify", 00:26:20.007 "status": "finished", 00:26:20.007 "verify_range": { 00:26:20.007 "start": 0, 00:26:20.007 "length": 8192 00:26:20.007 }, 00:26:20.007 "queue_depth": 128, 00:26:20.007 "io_size": 4096, 00:26:20.007 "runtime": 10.01492, 00:26:20.007 "iops": 5241.879116358394, 00:26:20.007 "mibps": 20.476090298274976, 00:26:20.007 "io_failed": 0, 00:26:20.007 "io_timeout": 0, 00:26:20.007 "avg_latency_us": 24386.83695093688, 00:26:20.007 "min_latency_us": 6225.92, 00:26:20.007 "max_latency_us": 71652.69333333333 00:26:20.007 } 00:26:20.007 ], 00:26:20.007 "core_count": 1 00:26:20.007 } 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 
00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:20.007 nvmf_trace.0 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2037771 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2037771 ']' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2037771 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2037771 00:26:20.007 08:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2037771' 00:26:20.007 killing process with pid 2037771 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2037771 00:26:20.007 Received shutdown signal, test time was about 10.000000 seconds 00:26:20.007 00:26:20.007 Latency(us) 00:26:20.007 [2024-11-20T07:23:24.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.007 [2024-11-20T07:23:24.736Z] =================================================================================================================== 00:26:20.007 [2024-11-20T07:23:24.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2037771 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:20.007 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:20.007 rmmod nvme_tcp 00:26:20.007 rmmod nvme_fabrics 00:26:20.007 rmmod nvme_keyring 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 2037424 ']' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2037424 ']' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2037424' 00:26:20.269 killing process with pid 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2037424 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 
-- # remove_target_ns 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:20.269 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:22.819 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:22.819 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BsM 00:26:22.819 00:26:22.819 real 0m24.217s 00:26:22.819 user 0m24.468s 00:26:22.819 sys 0m11.009s 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:22.819 ************************************ 00:26:22.819 END TEST nvmf_fips 00:26:22.819 ************************************ 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:22.819 ************************************ 00:26:22.819 START TEST nvmf_control_msg_list 00:26:22.819 ************************************ 00:26:22.819 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:22.820 * Looking for test storage... 00:26:22.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.820 08:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.820 --rc genhtml_branch_coverage=1 00:26:22.820 --rc genhtml_function_coverage=1 00:26:22.820 --rc 
genhtml_legend=1 00:26:22.820 --rc geninfo_all_blocks=1 00:26:22.820 --rc geninfo_unexecuted_blocks=1 00:26:22.820 00:26:22.820 ' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.820 --rc genhtml_branch_coverage=1 00:26:22.820 --rc genhtml_function_coverage=1 00:26:22.820 --rc genhtml_legend=1 00:26:22.820 --rc geninfo_all_blocks=1 00:26:22.820 --rc geninfo_unexecuted_blocks=1 00:26:22.820 00:26:22.820 ' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.820 --rc genhtml_branch_coverage=1 00:26:22.820 --rc genhtml_function_coverage=1 00:26:22.820 --rc genhtml_legend=1 00:26:22.820 --rc geninfo_all_blocks=1 00:26:22.820 --rc geninfo_unexecuted_blocks=1 00:26:22.820 00:26:22.820 ' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.820 --rc genhtml_branch_coverage=1 00:26:22.820 --rc genhtml_function_coverage=1 00:26:22.820 --rc genhtml_legend=1 00:26:22.820 --rc geninfo_all_blocks=1 00:26:22.820 --rc geninfo_unexecuted_blocks=1 00:26:22.820 00:26:22.820 ' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.820 08:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.820 
08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:22.820 08:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:22.820 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:22.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:26:22.821 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:30.970 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:30.970 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:30.970 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:30.970 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.970 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:30.971 Found net devices under 0000:31:00.0: cvl_0_0 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:30.971 Found net devices under 0000:31:00.1: cvl_0_1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:30.971 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@247 -- # create_target_ns 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # 
initiator=cvl_0_0 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:30.971 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:30.971 10.0.0.1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:30.971 10.0.0.2 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.971 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:30.972 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:30.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.543 ms 00:26:30.972 00:26:30.972 --- 10.0.0.1 ping statistics --- 00:26:30.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.972 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:30.972 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:30.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:26:30.972 00:26:30.972 --- 10.0.0.2 ping statistics --- 00:26:30.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.972 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:30.972 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:30.972 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:30.973 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:30.973 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:30.973 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:26:30.973 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:26:30.973 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:31.234 08:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=2044824 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 2044824 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2044824 ']' 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:31.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.234 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:31.234 [2024-11-20 08:23:35.835644] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:26:31.234 [2024-11-20 08:23:35.835713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.234 [2024-11-20 08:23:35.926126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.495 [2024-11-20 08:23:35.965829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.495 [2024-11-20 08:23:35.965871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.495 [2024-11-20 08:23:35.965879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.495 [2024-11-20 08:23:35.965886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.495 [2024-11-20 08:23:35.965892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:31.495 [2024-11-20 08:23:35.966524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 [2024-11-20 08:23:36.671472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 Malloc0 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:32.067 [2024-11-20 08:23:36.722360] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2044871 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2044873 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2044874 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2044871 00:26:32.067 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.067 [2024-11-20 08:23:36.792803] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:26:32.327 [2024-11-20 08:23:36.822812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:32.327 [2024-11-20 08:23:36.823074] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:33.292 Initializing NVMe Controllers 00:26:33.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:33.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:33.292 Initialization complete. Launching workers. 00:26:33.292 ======================================================== 00:26:33.292 Latency(us) 00:26:33.292 Device Information : IOPS MiB/s Average min max 00:26:33.292 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40905.25 40832.97 40984.42 00:26:33.292 ======================================================== 00:26:33.292 Total : 25.00 0.10 40905.25 40832.97 40984.42 00:26:33.292 00:26:33.292 Initializing NVMe Controllers 00:26:33.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:33.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:33.292 Initialization complete. Launching workers. 
00:26:33.292 ======================================================== 00:26:33.292 Latency(us) 00:26:33.292 Device Information : IOPS MiB/s Average min max 00:26:33.292 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40895.27 40747.91 40952.04 00:26:33.292 ======================================================== 00:26:33.292 Total : 25.00 0.10 40895.27 40747.91 40952.04 00:26:33.292 00:26:33.292 Initializing NVMe Controllers 00:26:33.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:33.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:33.292 Initialization complete. Launching workers. 00:26:33.292 ======================================================== 00:26:33.292 Latency(us) 00:26:33.292 Device Information : IOPS MiB/s Average min max 00:26:33.292 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40918.18 40837.56 41290.38 00:26:33.292 ======================================================== 00:26:33.292 Total : 25.00 0.10 40918.18 40837.56 41290.38 00:26:33.292 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2044873 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2044874 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:33.292 08:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:33.292 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:33.292 rmmod nvme_tcp 00:26:33.292 rmmod nvme_fabrics 00:26:33.292 rmmod nvme_keyring 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 2044824 ']' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2044824 ']' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2044824' 00:26:33.554 killing process with pid 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2044824 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:33.554 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@548 -- # iptables-save 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:26:36.103 00:26:36.103 real 0m13.198s 00:26:36.103 user 0m8.144s 00:26:36.103 sys 0m7.050s 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:36.103 ************************************ 00:26:36.103 END TEST nvmf_control_msg_list 00:26:36.103 ************************************ 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:36.103 ************************************ 00:26:36.103 START TEST nvmf_wait_for_buf 00:26:36.103 ************************************ 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:36.103 * Looking for test storage... 
00:26:36.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.103 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:36.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.103 --rc genhtml_branch_coverage=1 00:26:36.103 --rc genhtml_function_coverage=1 00:26:36.103 --rc genhtml_legend=1 00:26:36.104 --rc geninfo_all_blocks=1 00:26:36.104 --rc geninfo_unexecuted_blocks=1 00:26:36.104 00:26:36.104 ' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:36.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.104 --rc genhtml_branch_coverage=1 00:26:36.104 --rc genhtml_function_coverage=1 00:26:36.104 --rc genhtml_legend=1 00:26:36.104 --rc geninfo_all_blocks=1 00:26:36.104 --rc geninfo_unexecuted_blocks=1 00:26:36.104 00:26:36.104 ' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:36.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.104 --rc genhtml_branch_coverage=1 00:26:36.104 --rc genhtml_function_coverage=1 00:26:36.104 --rc genhtml_legend=1 00:26:36.104 --rc geninfo_all_blocks=1 00:26:36.104 --rc geninfo_unexecuted_blocks=1 00:26:36.104 00:26:36.104 ' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:36.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.104 --rc genhtml_branch_coverage=1 00:26:36.104 --rc genhtml_function_coverage=1 00:26:36.104 --rc genhtml_legend=1 00:26:36.104 --rc geninfo_all_blocks=1 00:26:36.104 --rc geninfo_unexecuted_blocks=1 00:26:36.104 00:26:36.104 ' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:36.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:26:36.104 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:26:44.251 
08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:44.251 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:44.251 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:44.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:44.252 08:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:44.252 Found net devices under 0000:31:00.0: cvl_0_0 00:26:44.252 08:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:44.252 Found net devices under 0000:31:00.1: cvl_0_1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:44.252 08:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@247 -- # create_target_ns 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 
00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:44.252 10.0.0.1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:44.252 10.0.0.2 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:44.252 08:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:44.252 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.253 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.253 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:44.253 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:44.516 08:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:44.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.637 ms 00:26:44.516 00:26:44.516 --- 10.0.0.1 ping statistics --- 00:26:44.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.516 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:44.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:26:44.516 00:26:44.516 --- 10.0.0.2 ping statistics --- 00:26:44.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.516 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:44.516 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:44.517 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=2049921 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 2049921 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2049921 ']' 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.778 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:44.778 [2024-11-20 08:23:49.314288] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:26:44.778 [2024-11-20 08:23:49.314360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.778 [2024-11-20 08:23:49.406462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.778 [2024-11-20 08:23:49.446623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.778 [2024-11-20 08:23:49.446662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.778 [2024-11-20 08:23:49.446670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.778 [2024-11-20 08:23:49.446677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.778 [2024-11-20 08:23:49.446683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:44.778 [2024-11-20 08:23:49.447294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 Malloc0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 [2024-11-20 08:23:50.241784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:45.721 [2024-11-20 08:23:50.278034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.721 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:45.721 [2024-11-20 08:23:50.379526] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:47.105 Initializing NVMe Controllers 00:26:47.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:47.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:47.105 Initialization complete. Launching workers. 00:26:47.105 ======================================================== 00:26:47.105 Latency(us) 00:26:47.105 Device Information : IOPS MiB/s Average min max 00:26:47.105 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33597.78 8993.26 103760.90 00:26:47.105 ======================================================== 00:26:47.105 Total : 124.00 15.50 33597.78 8993.26 103760.90 00:26:47.105 00:26:47.105 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:47.105 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:47.105 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.105 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:47.105 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:47.367 rmmod nvme_tcp 00:26:47.367 rmmod nvme_fabrics 00:26:47.367 rmmod nvme_keyring 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 2049921 ']' 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 2049921 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2049921 ']' 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2049921 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049921 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049921' 00:26:47.367 killing process with pid 2049921 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2049921 00:26:47.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2049921 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:47.628 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:49.540 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:26:49.541 
08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:26:49.541 00:26:49.541 real 0m13.808s 00:26:49.541 user 0m5.486s 00:26:49.541 sys 0m6.884s 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:49.541 ************************************ 00:26:49.541 END TEST nvmf_wait_for_buf 00:26:49.541 ************************************ 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:26:49.541 08:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:57.756 
08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.756 08:24:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:57.756 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:57.756 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:57.756 08:24:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:57.756 Found net devices under 0000:31:00.0: cvl_0_0 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:57.756 Found net devices under 0000:31:00.1: cvl_0_1 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:57.756 ************************************ 00:26:57.756 START TEST nvmf_perf_adq 00:26:57.756 ************************************ 00:26:57.756 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:57.756 * Looking for test storage... 00:26:57.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:57.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.757 --rc genhtml_branch_coverage=1 00:26:57.757 --rc genhtml_function_coverage=1 00:26:57.757 --rc genhtml_legend=1 00:26:57.757 --rc geninfo_all_blocks=1 00:26:57.757 --rc geninfo_unexecuted_blocks=1 00:26:57.757 00:26:57.757 ' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.757 --rc genhtml_branch_coverage=1 00:26:57.757 --rc genhtml_function_coverage=1 00:26:57.757 --rc genhtml_legend=1 00:26:57.757 --rc geninfo_all_blocks=1 00:26:57.757 --rc geninfo_unexecuted_blocks=1 00:26:57.757 00:26:57.757 ' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.757 --rc genhtml_branch_coverage=1 00:26:57.757 --rc genhtml_function_coverage=1 00:26:57.757 --rc genhtml_legend=1 00:26:57.757 --rc geninfo_all_blocks=1 00:26:57.757 --rc geninfo_unexecuted_blocks=1 00:26:57.757 00:26:57.757 ' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.757 --rc genhtml_branch_coverage=1 00:26:57.757 --rc genhtml_function_coverage=1 00:26:57.757 --rc genhtml_legend=1 00:26:57.757 --rc geninfo_all_blocks=1 00:26:57.757 --rc geninfo_unexecuted_blocks=1 00:26:57.757 00:26:57.757 ' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.757 
08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.757 08:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:57.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:57.757 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:57.758 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:57.758 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:57.758 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:26:57.758 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:05.973 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.973 08:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:05.973 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.973 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:05.973 Found net devices under 0000:31:00.0: cvl_0_0 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:05.974 Found net devices under 0000:31:00.1: cvl_0_1 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
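The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above strips the sysfs directory prefix so only interface names remain. A small sketch of the same expansion, using a hypothetical sysfs path:

```shell
# One glob from /sys/bus/pci/devices/$pci/net/* (example path):
pci_net_devs=("/sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0")
# ##*/ removes the longest leading match of */, leaving the basename.
pci_net_devs=("${pci_net_devs[@]##*/}")
```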
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:05.974 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:07.891 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:09.806 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:15.100 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.100 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:15.101 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:15.101 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:15.101 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:15.101 Found net devices under 0000:31:00.0: cvl_0_0 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.101 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:15.101 Found net devices under 0000:31:00.1: cvl_0_1 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:15.101 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:15.101 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:27:15.102 
08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:15.102 10.0.0.1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:15.102 08:24:19 
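The `val_to_ip` calls traced above turn the integer `ip_pool` value (0x0a000001 = 167772161) into dotted-quad form via `printf '%u.%u.%u.%u'`. A self-contained sketch of that conversion, assuming the helper simply shifts out one octet at a time:

```shell
# Sketch of a val_to_ip helper consistent with the traced printf output.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
initiator_ip=$(val_to_ip 167772161)   # 10.0.0.1 in the log
target_ip=$(val_to_ip 167772162)      # 10.0.0.2 in the log
```

The pool then advances by 2 per interface pair (`ip_pool += 2` in setup.sh@33), so each pair gets consecutive addresses.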
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:15.102 10.0.0.2 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ 
phy == veth ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:15.102 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:15.102 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:15.103 PING 10.0.0.1 (10.0.0.1) 
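The `local -n ns=NVMF_TARGET_NS_CMD` lines above use a bash nameref so the same helper can run a command either directly or prefixed with `ip netns exec nvmf_ns_spdk`. A root-free sketch of that dispatch pattern; the `echo` prefix stands in for the real netns wrapper so the example runs anywhere:

```shell
# run_in NAME cmd...: if NAME is non-empty, resolve it as an array
# of prefix words via nameref and prepend it, else run cmd directly.
run_in() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns   # nameref: ns aliases the named array
    "${ns[@]}" "$@"
  else
    "$@"
  fi
}
# Stand-in for NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk):
NS_CMD=(echo "netns-exec:")
with_ns=$(run_in NS_CMD ip link set lo up)
no_ns=$(run_in "" echo "direct")
```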
56(84) bytes of data. 00:27:15.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:27:15.103 00:27:15.103 --- 10.0.0.1 ping statistics --- 00:27:15.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.103 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:15.103 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:15.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:27:15.103 00:27:15.103 --- 10.0.0.2 ping statistics --- 00:27:15.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.103 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:15.103 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 
00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 
-- # local dev=target0 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:15.103 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:15.104 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:15.104 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:15.365 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.365 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=2061942 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 2061942 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2061942 ']' 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.365 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:15.365 [2024-11-20 08:24:19.940264] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:27:15.365 [2024-11-20 08:24:19.940331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:15.365 [2024-11-20 08:24:20.035400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:15.365 [2024-11-20 08:24:20.081732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:15.365 [2024-11-20 08:24:20.081773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:15.365 [2024-11-20 08:24:20.081781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:15.365 [2024-11-20 08:24:20.081788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:15.365 [2024-11-20 08:24:20.081794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:15.365 [2024-11-20 08:24:20.083626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.365 [2024-11-20 08:24:20.083741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:15.365 [2024-11-20 08:24:20.083910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.365 [2024-11-20 08:24:20.083910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:27:16.307 08:24:20
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 [2024-11-20 08:24:20.921488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 Malloc1 00:27:16.307 08:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.307 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 [2024-11-20 08:24:20.989246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.308 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.308 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2062130 00:27:16.308 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:16.308 08:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:27:18.853 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:27:18.853 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.853 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:18.853 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.853 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:27:18.853 "tick_rate": 2400000000,
00:27:18.853 "poll_groups": [
00:27:18.853 {
00:27:18.853 "name": "nvmf_tgt_poll_group_000",
00:27:18.853 "admin_qpairs": 1,
00:27:18.853 "io_qpairs": 1,
00:27:18.853 "current_admin_qpairs": 1,
00:27:18.853 "current_io_qpairs": 1,
00:27:18.853 "pending_bdev_io": 0,
00:27:18.853 "completed_nvme_io": 19576,
00:27:18.853 "transports": [
00:27:18.853 {
00:27:18.853 "trtype": "TCP"
00:27:18.853 }
00:27:18.853 ]
00:27:18.853 },
00:27:18.853 {
00:27:18.853 "name": "nvmf_tgt_poll_group_001",
00:27:18.853 "admin_qpairs": 0,
00:27:18.853 "io_qpairs": 1,
00:27:18.853 "current_admin_qpairs": 0,
00:27:18.853 "current_io_qpairs": 1,
00:27:18.853 "pending_bdev_io": 0,
00:27:18.853 "completed_nvme_io": 27008,
00:27:18.853 "transports": [
00:27:18.853 {
00:27:18.853 "trtype": "TCP"
00:27:18.853 }
00:27:18.853 ]
00:27:18.853 },
00:27:18.853 {
00:27:18.853 "name": "nvmf_tgt_poll_group_002",
00:27:18.853 "admin_qpairs": 0,
00:27:18.853 "io_qpairs": 1,
00:27:18.853 "current_admin_qpairs": 0,
00:27:18.853 "current_io_qpairs": 1,
00:27:18.853 "pending_bdev_io": 0,
00:27:18.853 "completed_nvme_io": 19377,
00:27:18.853 "transports": [
00:27:18.853 {
00:27:18.853 "trtype": "TCP"
00:27:18.853 }
00:27:18.853 ]
00:27:18.853 },
00:27:18.853 {
00:27:18.853 "name": "nvmf_tgt_poll_group_003",
00:27:18.853 "admin_qpairs": 0,
00:27:18.853 "io_qpairs": 1,
00:27:18.853 "current_admin_qpairs": 0,
00:27:18.853 "current_io_qpairs": 1,
00:27:18.853 "pending_bdev_io": 0,
00:27:18.853 "completed_nvme_io": 19566,
00:27:18.853 "transports": [
00:27:18.853 {
00:27:18.853 "trtype": "TCP"
00:27:18.853 }
00:27:18.853 ]
00:27:18.853 }
00:27:18.853 ]
00:27:18.853 }'
00:27:18.854 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:27:18.854 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:27:18.854 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:27:18.854 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:27:18.854 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2062130
00:27:26.997 Initializing NVMe Controllers
00:27:26.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:26.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:27:26.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:27:26.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:27:26.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:27:26.998 Initialization complete. Launching workers.
00:27:26.998 ======================================================== 00:27:26.998 Latency(us) 00:27:26.998 Device Information : IOPS MiB/s Average min max 00:27:26.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13459.68 52.58 4755.25 1360.04 8883.83 00:27:26.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14633.25 57.16 4373.19 1248.33 10104.83 00:27:26.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13238.89 51.71 4834.32 1391.68 11262.18 00:27:26.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11056.34 43.19 5787.71 1353.70 11617.94 00:27:26.998 ======================================================== 00:27:26.998 Total : 52388.16 204.64 4886.41 1248.33 11617.94 00:27:26.998 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:26.998 rmmod nvme_tcp 00:27:26.998 rmmod nvme_fabrics 00:27:26.998 rmmod nvme_keyring 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:27:26.998 08:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 2061942 ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2061942 ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2061942' 00:27:26.998 killing process with pid 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2061942 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:26.998 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:28.915 08:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:28.915 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:30.830 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:32.744 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:38.038 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.038 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:38.039 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:38.039 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.039 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:38.039 Found net devices under 0000:31:00.0: cvl_0_0 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:38.039 Found net devices under 0000:31:00.1: cvl_0_1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:38.039 
08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:38.039 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns 
cvl_0_1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:38.039 10.0.0.1 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:38.039 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:38.040 10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:38.040 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- 
# ping_ips 1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 
00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:38.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.557 ms 00:27:38.040 00:27:38.040 --- 10.0.0.1 ping statistics --- 00:27:38.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.040 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:38.040 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:38.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:38.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:27:38.040 00:27:38.040 --- 10.0.0.2 ping statistics --- 00:27:38.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.040 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:38.040 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:38.040 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:38.041 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:38.041 08:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:38.041 net.core.busy_poll = 1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:38.041 net.core.busy_read = 1 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:38.041 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:38.302 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns 
exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:27:38.302 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:38.302 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:27:38.302 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:38.302 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=2066875 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 2066875 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2066875 ']' 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:27:38.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.303 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 [2024-11-20 08:24:42.998067] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:27:38.303 [2024-11-20 08:24:42.998134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.563 [2024-11-20 08:24:43.084500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.563 [2024-11-20 08:24:43.120376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.563 [2024-11-20 08:24:43.120408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.563 [2024-11-20 08:24:43.120416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.563 [2024-11-20 08:24:43.120423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.563 [2024-11-20 08:24:43.120428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:38.563 [2024-11-20 08:24:43.121907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.563 [2024-11-20 08:24:43.122125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.563 [2024-11-20 08:24:43.122125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.563 [2024-11-20 08:24:43.121982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:39.135 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:39.395 08:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.395 [2024-11-20 08:24:43.951673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.395 Malloc1 00:27:39.395 08:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.395 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.395 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.396 [2024-11-20 08:24:44.021200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2066975 00:27:39.396 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:39.396 08:24:44 
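For reference, the adq_configure_nvmf_target steps traced above boil down to the following RPC sequence (commands and arguments copied from the trace; `rpc_cmd` in the test harness resolves to SPDK's scripts/rpc.py). This is a sketch that assumes a running SPDK target on the default /var/tmp/spdk.sock, not a standalone script:

```shell
# Socket-layer tuning for ADQ: enable placement IDs and zero-copy send on
# the default sock implementation ("posix" in the log) before framework init.
impl=$(rpc.py sock_get_default_impl | jq -r .impl_name)
rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
rpc.py framework_start_init

# TCP transport with the sock priority the test uses, then a malloc-backed
# namespace exported to the initiator at 10.0.0.2:4420.
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```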
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:41.309 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:41.309 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.309 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:41.569 "tick_rate": 2400000000, 00:27:41.569 "poll_groups": [ 00:27:41.569 { 00:27:41.569 "name": "nvmf_tgt_poll_group_000", 00:27:41.569 "admin_qpairs": 1, 00:27:41.569 "io_qpairs": 3, 00:27:41.569 "current_admin_qpairs": 1, 00:27:41.569 "current_io_qpairs": 3, 00:27:41.569 "pending_bdev_io": 0, 00:27:41.569 "completed_nvme_io": 30106, 00:27:41.569 "transports": [ 00:27:41.569 { 00:27:41.569 "trtype": "TCP" 00:27:41.569 } 00:27:41.569 ] 00:27:41.569 }, 00:27:41.569 { 00:27:41.569 "name": "nvmf_tgt_poll_group_001", 00:27:41.569 "admin_qpairs": 0, 00:27:41.569 "io_qpairs": 1, 00:27:41.569 "current_admin_qpairs": 0, 00:27:41.569 "current_io_qpairs": 1, 00:27:41.569 "pending_bdev_io": 0, 00:27:41.569 "completed_nvme_io": 34056, 00:27:41.569 "transports": [ 00:27:41.569 { 00:27:41.569 "trtype": "TCP" 00:27:41.569 } 00:27:41.569 ] 00:27:41.569 }, 00:27:41.569 { 00:27:41.569 "name": "nvmf_tgt_poll_group_002", 00:27:41.569 "admin_qpairs": 0, 00:27:41.569 "io_qpairs": 0, 00:27:41.569 "current_admin_qpairs": 0, 00:27:41.569 "current_io_qpairs": 0, 00:27:41.569 "pending_bdev_io": 0, 00:27:41.569 "completed_nvme_io": 0, 00:27:41.569 "transports": 
[ 00:27:41.569 { 00:27:41.569 "trtype": "TCP" 00:27:41.569 } 00:27:41.569 ] 00:27:41.569 }, 00:27:41.569 { 00:27:41.569 "name": "nvmf_tgt_poll_group_003", 00:27:41.569 "admin_qpairs": 0, 00:27:41.569 "io_qpairs": 0, 00:27:41.569 "current_admin_qpairs": 0, 00:27:41.569 "current_io_qpairs": 0, 00:27:41.569 "pending_bdev_io": 0, 00:27:41.569 "completed_nvme_io": 0, 00:27:41.569 "transports": [ 00:27:41.569 { 00:27:41.569 "trtype": "TCP" 00:27:41.569 } 00:27:41.569 ] 00:27:41.569 } 00:27:41.569 ] 00:27:41.569 }' 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:41.569 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2066975 00:27:49.810 Initializing NVMe Controllers 00:27:49.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:49.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:49.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:49.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:49.810 Initialization complete. Launching workers. 
00:27:49.810 ======================================================== 00:27:49.810 Latency(us) 00:27:49.810 Device Information : IOPS MiB/s Average min max 00:27:49.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6777.40 26.47 9471.06 1345.31 57774.93 00:27:49.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5953.90 23.26 10749.41 1276.10 57947.18 00:27:49.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 18863.00 73.68 3402.91 1141.58 45241.03 00:27:49.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7572.30 29.58 8458.59 1362.17 55991.72 00:27:49.810 ======================================================== 00:27:49.810 Total : 39166.59 152.99 6547.16 1141.58 57947.18 00:27:49.810 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:49.810 rmmod nvme_tcp 00:27:49.810 rmmod nvme_fabrics 00:27:49.810 rmmod nvme_keyring 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:27:49.810 08:24:54 
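The pass/fail gate in the trace above is the jq count of poll groups with `current_io_qpairs == 0` (2 here, so the `[[ 2 -lt 2 ]]` failure branch is skipped). A self-contained sketch of the same bookkeeping without jq, plus a cross-check that the per-core IOPS rows in the latency table sum to the Total row:

```shell
# Reduced copy of the nvmf_get_stats output captured above (one field per
# line so plain grep can stand in for the script's jq filter).
stats='"current_io_qpairs": 3
"current_io_qpairs": 1
"current_io_qpairs": 0
"current_io_qpairs": 0'

# Equivalent of: jq -r '.poll_groups[] | select(.current_io_qpairs == 0)' | wc -l
idle=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0$')
echo "idle poll groups: $idle"   # 2 -> the -lt 2 failure branch is not taken

# Per-core IOPS from the latency table; the Total row is their sum
# (39166.59 in the log, modulo per-row rounding).
total=$(awk 'BEGIN { printf "%.2f", 6777.40 + 5953.90 + 18863.00 + 7572.30 }')
echo "total IOPS: $total"
```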
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 2066875 ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2066875 ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066875' 00:27:49.810 killing process with pid 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2066875 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:49.810 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:53.107 08:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:53.107 00:27:53.107 real 0m55.419s 00:27:53.107 user 2m50.262s 00:27:53.107 sys 0m12.486s 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.107 ************************************ 00:27:53.107 END TEST nvmf_perf_adq 00:27:53.107 ************************************ 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:53.107 ************************************ 00:27:53.107 START TEST nvmf_shutdown 00:27:53.107 ************************************ 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:53.107 * Looking for test storage... 00:27:53.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:53.107 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.368 08:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.368 --rc genhtml_branch_coverage=1 00:27:53.368 --rc genhtml_function_coverage=1 00:27:53.368 --rc genhtml_legend=1 00:27:53.368 --rc geninfo_all_blocks=1 00:27:53.368 --rc geninfo_unexecuted_blocks=1 00:27:53.368 00:27:53.368 ' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.368 --rc genhtml_branch_coverage=1 00:27:53.368 --rc genhtml_function_coverage=1 00:27:53.368 --rc genhtml_legend=1 00:27:53.368 --rc geninfo_all_blocks=1 00:27:53.368 --rc geninfo_unexecuted_blocks=1 00:27:53.368 00:27:53.368 ' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.368 --rc genhtml_branch_coverage=1 00:27:53.368 --rc genhtml_function_coverage=1 00:27:53.368 --rc genhtml_legend=1 00:27:53.368 --rc geninfo_all_blocks=1 00:27:53.368 --rc geninfo_unexecuted_blocks=1 00:27:53.368 00:27:53.368 ' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.368 --rc genhtml_branch_coverage=1 00:27:53.368 --rc genhtml_function_coverage=1 00:27:53.368 --rc genhtml_legend=1 
00:27:53.368 --rc geninfo_all_blocks=1 00:27:53.368 --rc geninfo_unexecuted_blocks=1 00:27:53.368 00:27:53.368 ' 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:53.368 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:53.369 08:24:57 
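The scripts/common.sh trace above (`cmp_versions 1.15 '<' 2` via `lt`) splits each version string on dots and compares it component-by-component, padding the shorter one with zeros, which is why lcov 1.15 sorts before 2. A rough standalone equivalent of that helper (the function name is mine, not common.sh's):

```shell
version_lt() {
  # Return 0 (true) when dotted version $1 sorts strictly before $2.
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components count as 0, mirroring the traced comparison.
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```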
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:53.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:53.369 ************************************ 00:27:53.369 START TEST nvmf_shutdown_tc1 00:27:53.369 ************************************ 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:53.369 08:24:57 
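The `[: : integer expression expected` message above comes from common.sh line 31 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator of `[` requires an integer on both sides, and an unset or empty variable expands to the empty string. A minimal reproduction with the usual `:-0` default as the guard (the variable name here is illustrative, not the one common.sh actually tests):

```shell
unset SOME_FLAG   # stand-in for the unset variable behind the log's error

# Reproduces the error shape: empty string fed to -eq. The test command
# errors out (status 2), so the else branch is taken.
if [ "$SOME_FLAG" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "disabled"
fi

# Defaulting the expansion keeps the comparison well-formed with no error:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```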
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:27:53.369 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:01.511 08:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.511 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.512 08:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:01.512 Found 0000:31:00.0 (0x8086 - 0x159b) 
00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:01.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:01.512 Found net devices under 0000:31:00.0: cvl_0_0 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.512 08:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:01.512 Found net devices under 0000:31:00.1: cvl_0_1 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:01.512 08:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:01.512 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:01.512 10.0.0.1 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:01.512 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:01.513 10.0.0.2 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:01.513 
08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:01.513 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:01.513 08:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:01.774 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:01.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.507 ms 00:28:01.775 00:28:01.775 --- 10.0.0.1 ping statistics --- 00:28:01.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.775 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:01.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:01.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:28:01.775 00:28:01.775 --- 10.0.0.2 ping statistics --- 00:28:01.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.775 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:01.775 08:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:01.775 08:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:01.775 08:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:01.775 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.776 08:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=2074149 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 2074149 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2074149 ']' 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.776 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.776 [2024-11-20 08:25:06.463241] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:01.776 [2024-11-20 08:25:06.463306] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.036 [2024-11-20 08:25:06.572479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.036 [2024-11-20 08:25:06.624709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.036 [2024-11-20 08:25:06.624761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.036 [2024-11-20 08:25:06.624770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.036 [2024-11-20 08:25:06.624778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.036 [2024-11-20 08:25:06.624784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:02.036 [2024-11-20 08:25:06.626833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.036 [2024-11-20 08:25:06.626999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.036 [2024-11-20 08:25:06.627282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:02.036 [2024-11-20 08:25:06.627284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.606 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.606 [2024-11-20 08:25:07.327146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.866 08:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.866 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.867 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.867 Malloc1 00:28:02.867 [2024-11-20 08:25:07.452716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.867 Malloc2 00:28:02.867 Malloc3 00:28:02.867 Malloc4 00:28:02.867 Malloc5 00:28:03.127 Malloc6 00:28:03.127 Malloc7 00:28:03.127 Malloc8 00:28:03.127 Malloc9 
00:28:03.127 Malloc10 00:28:03.127 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.127 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:03.127 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.127 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2074524 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2074524 /var/tmp/bdevperf.sock 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2074524 ']' 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:03.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:28:03.388 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": 
${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 [2024-11-20 08:25:07.903994] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:03.389 [2024-11-20 08:25:07.904046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": "$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.389 { 00:28:03.389 "params": { 00:28:03.389 "name": "Nvme$subsystem", 00:28:03.389 "trtype": 
"$TEST_TRANSPORT", 00:28:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.389 "adrfam": "ipv4", 00:28:03.389 "trsvcid": "$NVMF_PORT", 00:28:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.389 "hdgst": ${hdgst:-false}, 00:28:03.389 "ddgst": ${ddgst:-false} 00:28:03.389 }, 00:28:03.389 "method": "bdev_nvme_attach_controller" 00:28:03.389 } 00:28:03.389 EOF 00:28:03.389 )") 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.389 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.390 { 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme$subsystem", 00:28:03.390 "trtype": "$TEST_TRANSPORT", 00:28:03.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "$NVMF_PORT", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.390 "hdgst": ${hdgst:-false}, 00:28:03.390 "ddgst": ${ddgst:-false} 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 } 00:28:03.390 EOF 00:28:03.390 )") 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:03.390 { 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme$subsystem", 00:28:03.390 "trtype": "$TEST_TRANSPORT", 00:28:03.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": 
"$NVMF_PORT", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.390 "hdgst": ${hdgst:-false}, 00:28:03.390 "ddgst": ${ddgst:-false} 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 } 00:28:03.390 EOF 00:28:03.390 )") 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:28:03.390 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme1", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme2", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme3", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:03.390 "hdgst": false, 00:28:03.390 
"ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme4", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme5", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme6", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme7", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme8", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 
"trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme9", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 },{ 00:28:03.390 "params": { 00:28:03.390 "name": "Nvme10", 00:28:03.390 "trtype": "tcp", 00:28:03.390 "traddr": "10.0.0.2", 00:28:03.390 "adrfam": "ipv4", 00:28:03.390 "trsvcid": "4420", 00:28:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:03.390 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:03.390 "hdgst": false, 00:28:03.390 "ddgst": false 00:28:03.390 }, 00:28:03.390 "method": "bdev_nvme_attach_controller" 00:28:03.390 }' 00:28:03.390 [2024-11-20 08:25:07.983331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.391 [2024-11-20 08:25:08.019599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2074524 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:04.772 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:05.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2074524 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2074149 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.715 { 00:28:05.715 "params": { 00:28:05.715 "name": "Nvme$subsystem", 00:28:05.715 "trtype": "$TEST_TRANSPORT", 00:28:05.715 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:05.715 "adrfam": "ipv4", 00:28:05.715 "trsvcid": "$NVMF_PORT", 00:28:05.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.715 "hdgst": ${hdgst:-false}, 00:28:05.715 "ddgst": ${ddgst:-false} 00:28:05.715 }, 00:28:05.715 "method": "bdev_nvme_attach_controller" 00:28:05.715 } 00:28:05.715 EOF 00:28:05.715 )") 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.715 { 00:28:05.715 "params": { 00:28:05.715 "name": "Nvme$subsystem", 00:28:05.715 "trtype": "$TEST_TRANSPORT", 00:28:05.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.715 "adrfam": "ipv4", 00:28:05.715 "trsvcid": "$NVMF_PORT", 00:28:05.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.715 "hdgst": ${hdgst:-false}, 00:28:05.715 "ddgst": ${ddgst:-false} 00:28:05.715 }, 00:28:05.715 "method": "bdev_nvme_attach_controller" 00:28:05.715 } 00:28:05.715 EOF 00:28:05.715 )") 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.715 { 00:28:05.715 "params": { 00:28:05.715 "name": "Nvme$subsystem", 00:28:05.715 "trtype": "$TEST_TRANSPORT", 00:28:05.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.715 "adrfam": "ipv4", 00:28:05.715 "trsvcid": "$NVMF_PORT", 00:28:05.715 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.715 "hdgst": ${hdgst:-false}, 00:28:05.715 "ddgst": ${ddgst:-false} 00:28:05.715 }, 00:28:05.715 "method": "bdev_nvme_attach_controller" 00:28:05.715 } 00:28:05.715 EOF 00:28:05.715 )") 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.715 { 00:28:05.715 "params": { 00:28:05.715 "name": "Nvme$subsystem", 00:28:05.715 "trtype": "$TEST_TRANSPORT", 00:28:05.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.715 "adrfam": "ipv4", 00:28:05.715 "trsvcid": "$NVMF_PORT", 00:28:05.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.715 "hdgst": ${hdgst:-false}, 00:28:05.715 "ddgst": ${ddgst:-false} 00:28:05.715 }, 00:28:05.715 "method": "bdev_nvme_attach_controller" 00:28:05.715 } 00:28:05.715 EOF 00:28:05.715 )") 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.715 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": 
${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": ${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 [2024-11-20 08:25:10.272845] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:28:05.716 [2024-11-20 08:25:10.272907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074900 ] 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": ${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": ${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": 
"bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": ${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.716 { 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme$subsystem", 00:28:05.716 "trtype": "$TEST_TRANSPORT", 00:28:05.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "$NVMF_PORT", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.716 "hdgst": ${hdgst:-false}, 00:28:05.716 "ddgst": ${ddgst:-false} 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 } 00:28:05.716 EOF 00:28:05.716 )") 00:28:05.716 08:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:28:05.716 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme1", 00:28:05.716 "trtype": "tcp", 00:28:05.716 "traddr": "10.0.0.2", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "4420", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.716 "hdgst": false, 00:28:05.716 "ddgst": false 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 },{ 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme2", 00:28:05.716 "trtype": "tcp", 00:28:05.716 "traddr": "10.0.0.2", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "4420", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.716 "hdgst": false, 00:28:05.716 "ddgst": false 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 },{ 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme3", 00:28:05.716 "trtype": "tcp", 00:28:05.716 "traddr": "10.0.0.2", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "4420", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.716 "hdgst": false, 00:28:05.716 "ddgst": false 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 },{ 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme4", 00:28:05.716 "trtype": "tcp", 00:28:05.716 "traddr": "10.0.0.2", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "4420", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.716 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:28:05.716 "hdgst": false, 00:28:05.716 "ddgst": false 00:28:05.716 }, 00:28:05.716 "method": "bdev_nvme_attach_controller" 00:28:05.716 },{ 00:28:05.716 "params": { 00:28:05.716 "name": "Nvme5", 00:28:05.716 "trtype": "tcp", 00:28:05.716 "traddr": "10.0.0.2", 00:28:05.716 "adrfam": "ipv4", 00:28:05.716 "trsvcid": "4420", 00:28:05.716 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.716 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.716 "hdgst": false, 00:28:05.716 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 },{ 00:28:05.717 "params": { 00:28:05.717 "name": "Nvme6", 00:28:05.717 "trtype": "tcp", 00:28:05.717 "traddr": "10.0.0.2", 00:28:05.717 "adrfam": "ipv4", 00:28:05.717 "trsvcid": "4420", 00:28:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.717 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.717 "hdgst": false, 00:28:05.717 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 },{ 00:28:05.717 "params": { 00:28:05.717 "name": "Nvme7", 00:28:05.717 "trtype": "tcp", 00:28:05.717 "traddr": "10.0.0.2", 00:28:05.717 "adrfam": "ipv4", 00:28:05.717 "trsvcid": "4420", 00:28:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.717 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.717 "hdgst": false, 00:28:05.717 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 },{ 00:28:05.717 "params": { 00:28:05.717 "name": "Nvme8", 00:28:05.717 "trtype": "tcp", 00:28:05.717 "traddr": "10.0.0.2", 00:28:05.717 "adrfam": "ipv4", 00:28:05.717 "trsvcid": "4420", 00:28:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.717 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.717 "hdgst": false, 00:28:05.717 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 },{ 00:28:05.717 "params": { 00:28:05.717 "name": "Nvme9", 00:28:05.717 "trtype": "tcp", 00:28:05.717 
"traddr": "10.0.0.2", 00:28:05.717 "adrfam": "ipv4", 00:28:05.717 "trsvcid": "4420", 00:28:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.717 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.717 "hdgst": false, 00:28:05.717 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 },{ 00:28:05.717 "params": { 00:28:05.717 "name": "Nvme10", 00:28:05.717 "trtype": "tcp", 00:28:05.717 "traddr": "10.0.0.2", 00:28:05.717 "adrfam": "ipv4", 00:28:05.717 "trsvcid": "4420", 00:28:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.717 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.717 "hdgst": false, 00:28:05.717 "ddgst": false 00:28:05.717 }, 00:28:05.717 "method": "bdev_nvme_attach_controller" 00:28:05.717 }' 00:28:05.717 [2024-11-20 08:25:10.352946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.717 [2024-11-20 08:25:10.389104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.101 Running I/O for 1 seconds... 
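The JSON blob printed above is assembled one subsystem at a time by the traced `gen_nvmf_target_json` loop: each iteration appends a heredoc-built `bdev_nvme_attach_controller` entry to a `config` array, which is then comma-joined for bdevperf's `--json` input. A minimal standalone sketch of that pattern (hypothetical helper name; the hard-coded address and port are illustrative, not SPDK's actual defaults):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation seen in the trace above.
# One heredoc per subsystem; entries are comma-joined into the JSON stream
# that bdevperf consumes via --json. Names and values here are illustrative.
gen_target_json() {
    local config=()
    local subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the array entries with commas, matching the "},{"-separated
    # output visible in the log.
    local IFS=,
    echo "${config[*]}"
}

gen_target_json 1 2 3
```

Each generated entry corresponds to one `bdev_nvme_attach_controller` RPC, so passing `1 2 3 ... 10` yields the ten-controller config the test feeds to bdevperf.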
00:28:08.303 1861.00 IOPS, 116.31 MiB/s 00:28:08.303 Latency(us) 00:28:08.303 [2024-11-20T07:25:13.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme1n1 : 1.10 232.30 14.52 0.00 0.00 272150.19 18131.63 253405.87 00:28:08.303 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme2n1 : 1.16 220.06 13.75 0.00 0.00 283332.48 24903.68 255153.49 00:28:08.303 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme3n1 : 1.09 235.00 14.69 0.00 0.00 259568.21 16165.55 262144.00 00:28:08.303 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme4n1 : 1.10 232.59 14.54 0.00 0.00 258063.15 19005.44 262144.00 00:28:08.303 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme5n1 : 1.14 225.01 14.06 0.00 0.00 262656.21 16165.55 248162.99 00:28:08.303 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme6n1 : 1.13 230.58 14.41 0.00 0.00 250505.87 4341.76 279620.27 00:28:08.303 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme7n1 : 1.18 271.53 16.97 0.00 0.00 210675.71 10758.83 249910.61 00:28:08.303 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme8n1 : 1.17 273.57 17.10 0.00 0.00 205071.36 14745.60 253405.87 
00:28:08.303 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme9n1 : 1.18 270.53 16.91 0.00 0.00 203803.73 6635.52 255153.49 00:28:08.303 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.303 Verification LBA range: start 0x0 length 0x400 00:28:08.303 Nvme10n1 : 1.17 218.00 13.62 0.00 0.00 248338.35 18896.21 274377.39 00:28:08.303 [2024-11-20T07:25:13.032Z] =================================================================================================================== 00:28:08.303 [2024-11-20T07:25:13.032Z] Total : 2409.17 150.57 0.00 0.00 242716.75 4341.76 279620.27 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:08.303 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:28:08.304 08:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:08.304 rmmod nvme_tcp 00:28:08.304 rmmod nvme_fabrics 00:28:08.304 rmmod nvme_keyring 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:28:08.304 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 2074149 ']' 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 2074149 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2074149 ']' 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2074149 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.304 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074149 00:28:08.563 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.563 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.563 08:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074149' 00:28:08.563 killing process with pid 2074149 00:28:08.563 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2074149 00:28:08.563 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2074149 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@254 -- # local dev 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:08.824 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # return 0 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@274 -- # iptr 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-save 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-restore 00:28:10.739 00:28:10.739 real 0m17.400s 00:28:10.739 user 0m32.648s 00:28:10.739 sys 0m7.530s 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.739 ************************************ 00:28:10.739 END TEST nvmf_shutdown_tc1 00:28:10.739 ************************************ 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.739 ************************************ 00:28:10.739 START TEST nvmf_shutdown_tc2 00:28:10.739 ************************************ 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:10.739 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.739 
08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:10.739 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # local -ga e810 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:11.000 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:11.000 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:11.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:11.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:11.001 08:25:15 
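The trace above shows nvmf/common.sh building per-family PCI id tables (e810, x722, mlx) and matching each discovered device against them; both ports here report 0x8086 - 0x159b and land in the e810 bucket. A standalone sketch of that bucketing, using the exact device ids from the log (the function name classify_nic and the 0x15b3 Mellanox vendor id are illustrative additions, not part of nvmf/common.sh):

```shell
# Hedged sketch of the vendor/device -> NIC-family bucketing traced above.
# Device ids are copied from the log; classify_nic and the 0x15b3 vendor
# id for the mlx entries are assumptions, not part of nvmf/common.sh.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2) echo x722 ;;
        0x15b3:0xa2dc | 0x15b3:0x1021 | 0x15b3:0xa2d6 | 0x15b3:0x101d | \
        0x15b3:0x101b | 0x15b3:0x1017 | 0x15b3:0x1019 | 0x15b3:0x1015 | \
        0x15b3:0x1013) echo mlx ;;
        *) echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the two ice ports in this run -> e810
```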
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:11.001 Found net devices under 0000:31:00.0: cvl_0_0 00:28:11.001 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:11.001 Found net devices under 0000:31:00.1: cvl_0_1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:11.001 08:25:15 
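The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob plus the `${pci_net_devs[@]##*/}` strip is what turns each PCI address into its kernel netdev name (cvl_0_0, cvl_0_1 above). The same lookup as a small function (net_devs_for_pci and its overridable sysfs root are additions for illustration and testing; the script globs /sys directly):

```shell
# Sketch of the sysfs lookup traced above: list the net devices that hang
# off one PCI function. net_devs_for_pci and the overridable root are
# illustrative; nvmf/common.sh globs /sys/bus/pci/devices directly.
net_devs_for_pci() {
    local pci=$1 root=${2:-/sys/bus/pci/devices} d
    for d in "$root/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # ${d##*/} mirrors the name strip
    done
    return 0
}
```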
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:11.001 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@52 -- # [[ phy 
== phy ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:11.001 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:11.001 10.0.0.1 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:11.001 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 
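set_ip hands val_to_ip a 32-bit value from the ip_pool (0x0a000001) and printf '%u.%u.%u.%u' renders it as a dotted quad; consecutive pool values become the initiator/target pair. A self-contained equivalent of that conversion (a sketch reusing the script's helper name, with the byte-splitting written out inline):

```shell
# Sketch of setup.sh's val_to_ip: render a 32-bit value as a dotted quad,
# as traced above for the 10.0.0.1/10.0.0.2 pair.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 255)) $((val >> 16 & 255)) \
        $((val >> 8 & 255)) $((val & 255))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```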
00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:11.002 10.0.0.2 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns 
exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:11.002 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < 
pairs )) 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:11.264 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:28:11.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:11.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.509 ms
00:28:11.264
00:28:11.264 --- 10.0.0.1 ping statistics ---
00:28:11.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:11.264 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/setup.sh@159 -- # get_net_dev target0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:11.264 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ping -c 1 
10.0.0.2
00:28:11.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:11.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms
00:28:11.265
00:28:11.265 --- 10.0.0.2 ping statistics ---
00:28:11.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:11.265 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair++ ))
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159
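The nvmf/setup.sh steps traced above reduce to a short sequence: create the namespace, move the target port into it, address both ends, bring them up, open TCP/4420, and ping in both directions. A condensed replay of those commands, using the same interface and namespace names as the log (DRY_RUN, which defaults to on here so the sketch is safe to run unprivileged, is an illustrative addition; a real run needs root and the two ice ports):

```shell
# Condensed replay of the interface-pair setup traced above. With DRY_RUN=1
# (the default here, an illustrative addition) each command is only printed;
# a real run needs root plus the cvl_0_0/cvl_0_1 ports.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=nvmf_ns_spdk
run ip netns add "$NS"                                        # create_target_ns
run ip netns exec "$NS" ip link set lo up                     # set_up lo
run ip link set cvl_0_1 netns "$NS"                           # add_to_ns
run ip addr add 10.0.0.1/24 dev cvl_0_0                       # set_ip initiator
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_1   # set_ip target
run ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # ipts
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # ping_ips
run ping -c 1 10.0.0.2
```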
-- # get_net_dev initiator0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:11.265 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:11.265 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.265 08:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2076064 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2076064 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2076064 ']' 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:11.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.265 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.526 [2024-11-20 08:25:16.048004] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:11.526 [2024-11-20 08:25:16.048076] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.526 [2024-11-20 08:25:16.148835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.526 [2024-11-20 08:25:16.182975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.526 [2024-11-20 08:25:16.183008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.526 [2024-11-20 08:25:16.183014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.526 [2024-11-20 08:25:16.183022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.526 [2024-11-20 08:25:16.183027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:11.526 [2024-11-20 08:25:16.184371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.526 [2024-11-20 08:25:16.184533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.526 [2024-11-20 08:25:16.184691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.526 [2024-11-20 08:25:16.184694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.469 [2024-11-20 08:25:16.895370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.469 08:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.469 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.469 Malloc1 00:28:12.469 [2024-11-20 08:25:17.004689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.469 Malloc2 00:28:12.469 Malloc3 00:28:12.469 Malloc4 00:28:12.469 Malloc5 00:28:12.469 Malloc6 00:28:12.731 Malloc7 00:28:12.731 Malloc8 00:28:12.731 Malloc9 
00:28:12.731 Malloc10 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2076421 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2076421 /var/tmp/bdevperf.sock 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2076421 ']' 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:12.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.731 { 00:28:12.731 "params": { 00:28:12.731 "name": "Nvme$subsystem", 00:28:12.731 "trtype": "$TEST_TRANSPORT", 00:28:12.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.731 "adrfam": "ipv4", 00:28:12.731 "trsvcid": "$NVMF_PORT", 00:28:12.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.731 "hdgst": ${hdgst:-false}, 00:28:12.731 "ddgst": ${ddgst:-false} 00:28:12.731 }, 00:28:12.731 "method": "bdev_nvme_attach_controller" 00:28:12.731 } 00:28:12.731 EOF 00:28:12.731 )") 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.731 { 00:28:12.731 "params": { 00:28:12.731 "name": "Nvme$subsystem", 00:28:12.731 "trtype": "$TEST_TRANSPORT", 00:28:12.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.731 "adrfam": "ipv4", 00:28:12.731 "trsvcid": "$NVMF_PORT", 00:28:12.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.731 "hdgst": ${hdgst:-false}, 00:28:12.731 "ddgst": ${ddgst:-false} 00:28:12.731 }, 00:28:12.731 "method": "bdev_nvme_attach_controller" 00:28:12.731 } 00:28:12.731 EOF 00:28:12.731 )") 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.731 { 00:28:12.731 "params": { 00:28:12.731 "name": "Nvme$subsystem", 00:28:12.731 "trtype": "$TEST_TRANSPORT", 00:28:12.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.731 "adrfam": "ipv4", 00:28:12.731 "trsvcid": "$NVMF_PORT", 00:28:12.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.731 "hdgst": ${hdgst:-false}, 00:28:12.731 "ddgst": ${ddgst:-false} 00:28:12.731 }, 00:28:12.731 "method": "bdev_nvme_attach_controller" 00:28:12.731 } 00:28:12.731 EOF 00:28:12.731 )") 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.731 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:28:12.731 { 00:28:12.731 "params": { 00:28:12.731 "name": "Nvme$subsystem", 00:28:12.731 "trtype": "$TEST_TRANSPORT", 00:28:12.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.731 "adrfam": "ipv4", 00:28:12.731 "trsvcid": "$NVMF_PORT", 00:28:12.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.732 "hdgst": ${hdgst:-false}, 00:28:12.732 "ddgst": ${ddgst:-false} 00:28:12.732 }, 00:28:12.732 "method": "bdev_nvme_attach_controller" 00:28:12.732 } 00:28:12.732 EOF 00:28:12.732 )") 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.732 { 00:28:12.732 "params": { 00:28:12.732 "name": "Nvme$subsystem", 00:28:12.732 "trtype": "$TEST_TRANSPORT", 00:28:12.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.732 "adrfam": "ipv4", 00:28:12.732 "trsvcid": "$NVMF_PORT", 00:28:12.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.732 "hdgst": ${hdgst:-false}, 00:28:12.732 "ddgst": ${ddgst:-false} 00:28:12.732 }, 00:28:12.732 "method": "bdev_nvme_attach_controller" 00:28:12.732 } 00:28:12.732 EOF 00:28:12.732 )") 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.732 { 00:28:12.732 "params": { 00:28:12.732 "name": "Nvme$subsystem", 00:28:12.732 "trtype": "$TEST_TRANSPORT", 
00:28:12.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.732 "adrfam": "ipv4", 00:28:12.732 "trsvcid": "$NVMF_PORT", 00:28:12.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.732 "hdgst": ${hdgst:-false}, 00:28:12.732 "ddgst": ${ddgst:-false} 00:28:12.732 }, 00:28:12.732 "method": "bdev_nvme_attach_controller" 00:28:12.732 } 00:28:12.732 EOF 00:28:12.732 )") 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.732 [2024-11-20 08:25:17.449490] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:12.732 [2024-11-20 08:25:17.449545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076421 ] 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.732 { 00:28:12.732 "params": { 00:28:12.732 "name": "Nvme$subsystem", 00:28:12.732 "trtype": "$TEST_TRANSPORT", 00:28:12.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.732 "adrfam": "ipv4", 00:28:12.732 "trsvcid": "$NVMF_PORT", 00:28:12.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.732 "hdgst": ${hdgst:-false}, 00:28:12.732 "ddgst": ${ddgst:-false} 00:28:12.732 }, 00:28:12.732 "method": "bdev_nvme_attach_controller" 00:28:12.732 } 00:28:12.732 EOF 00:28:12.732 )") 00:28:12.732 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.994 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.994 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.994 { 00:28:12.994 "params": { 00:28:12.994 "name": "Nvme$subsystem", 00:28:12.994 "trtype": "$TEST_TRANSPORT", 00:28:12.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.994 "adrfam": "ipv4", 00:28:12.994 "trsvcid": "$NVMF_PORT", 00:28:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.994 "hdgst": ${hdgst:-false}, 00:28:12.994 "ddgst": ${ddgst:-false} 00:28:12.994 }, 00:28:12.994 "method": "bdev_nvme_attach_controller" 00:28:12.994 } 00:28:12.994 EOF 00:28:12.994 )") 00:28:12.994 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.994 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.994 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.994 { 00:28:12.994 "params": { 00:28:12.994 "name": "Nvme$subsystem", 00:28:12.994 "trtype": "$TEST_TRANSPORT", 00:28:12.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.994 "adrfam": "ipv4", 00:28:12.994 "trsvcid": "$NVMF_PORT", 00:28:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.994 "hdgst": ${hdgst:-false}, 00:28:12.994 "ddgst": ${ddgst:-false} 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 } 00:28:12.995 EOF 00:28:12.995 )") 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:12.995 08:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:12.995 { 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme$subsystem", 00:28:12.995 "trtype": "$TEST_TRANSPORT", 00:28:12.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "$NVMF_PORT", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.995 "hdgst": ${hdgst:-false}, 00:28:12.995 "ddgst": ${ddgst:-false} 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 } 00:28:12.995 EOF 00:28:12.995 )") 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:28:12.995 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme1", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme2", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 
00:28:12.995 "params": { 00:28:12.995 "name": "Nvme3", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme4", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme5", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme6", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme7", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.995 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme8", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme9", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 },{ 00:28:12.995 "params": { 00:28:12.995 "name": "Nvme10", 00:28:12.995 "trtype": "tcp", 00:28:12.995 "traddr": "10.0.0.2", 00:28:12.995 "adrfam": "ipv4", 00:28:12.995 "trsvcid": "4420", 00:28:12.995 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.995 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.995 "hdgst": false, 00:28:12.995 "ddgst": false 00:28:12.995 }, 00:28:12.995 "method": "bdev_nvme_attach_controller" 00:28:12.995 }' 00:28:12.995 [2024-11-20 08:25:17.528590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.995 [2024-11-20 08:25:17.564736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.381 Running I/O for 10 seconds... 
00:28:14.381 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.381 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:14.381 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:14.381 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.381 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.381 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.644 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:14.644 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:14.644 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:14.911 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2076421 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2076421 
']' 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2076421 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2076421 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2076421' 00:28:15.172 killing process with pid 2076421 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2076421 00:28:15.172 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2076421 00:28:15.172 Received shutdown signal, test time was about 0.991722 seconds 00:28:15.172 00:28:15.173 Latency(us) 00:28:15.173 [2024-11-20T07:25:19.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.173 Verification LBA range: start 0x0 length 0x400 00:28:15.173 Nvme1n1 : 0.99 259.36 16.21 0.00 0.00 243866.88 17803.95 251658.24 00:28:15.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.173 Verification LBA range: start 0x0 length 0x400 00:28:15.173 Nvme2n1 : 0.98 264.94 16.56 0.00 0.00 232951.91 4287.15 223696.21 
00:28:15.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme3n1 : 0.97 267.33 16.71 0.00 0.00 226610.99 4860.59 258648.75
00:28:15.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme4n1 : 0.99 258.37 16.15 0.00 0.00 230238.93 14199.47 258648.75
00:28:15.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme5n1 : 0.96 200.01 12.50 0.00 0.00 290504.53 16602.45 253405.87
00:28:15.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme6n1 : 0.96 201.01 12.56 0.00 0.00 282582.47 30583.47 237677.23
00:28:15.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme7n1 : 0.98 264.64 16.54 0.00 0.00 209595.37 4041.39 246415.36
00:28:15.173 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme8n1 : 0.98 261.33 16.33 0.00 0.00 208106.24 13216.43 251658.24
00:28:15.173 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme9n1 : 0.97 198.13 12.38 0.00 0.00 267833.46 38010.88 256901.12
00:28:15.173 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.173 Verification LBA range: start 0x0 length 0x400
00:28:15.173 Nvme10n1 : 0.97 198.80 12.43 0.00 0.00 260230.26 17257.81 267386.88
[2024-11-20T07:25:19.902Z] ===================================================================================================================
00:28:15.173
[2024-11-20T07:25:19.902Z] Total : 2373.92 148.37 0.00 0.00 241817.09 4041.39 267386.88 00:28:15.433 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2076064 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:16.375 rmmod nvme_tcp 00:28:16.375 rmmod nvme_fabrics 00:28:16.375 rmmod nvme_keyring 00:28:16.375 08:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 2076064 ']' 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 2076064 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2076064 ']' 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2076064 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.375 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2076064 00:28:16.636 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.636 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.636 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2076064' 00:28:16.636 killing process with pid 2076064 00:28:16.636 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2076064 00:28:16.636 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2076064 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@254 -- # local dev 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:16.899 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # return 0 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # 
local dev=cvl_0_0 in_ns= 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@274 -- # iptr 00:28:18.814 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-save 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-restore 00:28:18.814 00:28:18.814 real 0m8.008s 00:28:18.814 user 0m23.653s 00:28:18.814 sys 0m1.370s 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.814 ************************************ 00:28:18.814 END TEST nvmf_shutdown_tc2 00:28:18.814 ************************************ 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.814 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.077 ************************************ 00:28:19.077 START TEST nvmf_shutdown_tc3 00:28:19.077 ************************************ 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:19.077 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:19.077 
08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.077 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:19.077 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:19.077 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:19.077 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:19.077 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:19.077 Found net devices under 0000:31:00.0: cvl_0_0 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.077 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:19.077 Found net devices under 0000:31:00.1: cvl_0_1 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:19.077 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@247 -- # 
create_target_ns 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 
00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:19.078 
08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 
10.0.0.1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:19.078 10.0.0.1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:19.078 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:19.078 10.0.0.2 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:19.078 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:19.341 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:19.341 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.341 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:19.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:28:19.341 00:28:19.341 --- 10.0.0.1 ping statistics --- 00:28:19.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.341 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:19.341 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:19.342 
08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:19.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:19.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:28:19.342 00:28:19.342 --- 10.0.0.2 ping statistics --- 00:28:19.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.342 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:19.342 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.342 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:19.342 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:19.342 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.342 08:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=2077902 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 2077902 00:28:19.342 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2077902 ']' 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:19.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.343 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.603 [2024-11-20 08:25:24.125322] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:19.603 [2024-11-20 08:25:24.125388] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.603 [2024-11-20 08:25:24.228376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.603 [2024-11-20 08:25:24.261968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.603 [2024-11-20 08:25:24.262000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.603 [2024-11-20 08:25:24.262006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.603 [2024-11-20 08:25:24.262011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.604 [2024-11-20 08:25:24.262015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.604 [2024-11-20 08:25:24.263318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.604 [2024-11-20 08:25:24.263481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.604 [2024-11-20 08:25:24.263635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.604 [2024-11-20 08:25:24.263637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.546 [2024-11-20 08:25:24.962065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.546 08:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.546 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.546 Malloc1 00:28:20.546 [2024-11-20 08:25:25.074527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.546 Malloc2 00:28:20.546 Malloc3 00:28:20.546 Malloc4 00:28:20.546 Malloc5 00:28:20.546 Malloc6 00:28:20.807 Malloc7 00:28:20.807 Malloc8 00:28:20.807 Malloc9 
00:28:20.807 Malloc10 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2078272 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2078272 /var/tmp/bdevperf.sock 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2078272 ']' 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:20.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.807 { 00:28:20.807 "params": { 00:28:20.807 "name": "Nvme$subsystem", 00:28:20.807 "trtype": "$TEST_TRANSPORT", 00:28:20.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.807 "adrfam": "ipv4", 00:28:20.807 "trsvcid": "$NVMF_PORT", 00:28:20.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.807 "hdgst": ${hdgst:-false}, 00:28:20.807 "ddgst": ${ddgst:-false} 00:28:20.807 }, 00:28:20.807 "method": "bdev_nvme_attach_controller" 00:28:20.807 } 00:28:20.807 EOF 00:28:20.807 )") 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:28:20.807 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.807 { 00:28:20.807 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 
00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 [2024-11-20 08:25:25.522100] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:20.808 [2024-11-20 08:25:25.522154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078272 ] 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:20.808 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:20.808 { 00:28:20.808 "params": { 00:28:20.808 "name": "Nvme$subsystem", 00:28:20.808 "trtype": "$TEST_TRANSPORT", 00:28:20.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.808 "adrfam": "ipv4", 00:28:20.808 "trsvcid": "$NVMF_PORT", 00:28:20.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.808 "hdgst": ${hdgst:-false}, 00:28:20.808 "ddgst": ${ddgst:-false} 00:28:20.808 }, 00:28:20.808 "method": "bdev_nvme_attach_controller" 00:28:20.808 } 00:28:20.808 EOF 00:28:20.808 )") 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:21.069 { 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme$subsystem", 00:28:21.069 "trtype": "$TEST_TRANSPORT", 00:28:21.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "$NVMF_PORT", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.069 "hdgst": ${hdgst:-false}, 00:28:21.069 "ddgst": ${ddgst:-false} 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 } 00:28:21.069 EOF 00:28:21.069 )") 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:21.069 08:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:21.069 { 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme$subsystem", 00:28:21.069 "trtype": "$TEST_TRANSPORT", 00:28:21.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "$NVMF_PORT", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.069 "hdgst": ${hdgst:-false}, 00:28:21.069 "ddgst": ${ddgst:-false} 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 } 00:28:21.069 EOF 00:28:21.069 )") 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:28:21.069 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme1", 00:28:21.069 "trtype": "tcp", 00:28:21.069 "traddr": "10.0.0.2", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "4420", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.069 "hdgst": false, 00:28:21.069 "ddgst": false 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 },{ 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme2", 00:28:21.069 "trtype": "tcp", 00:28:21.069 "traddr": "10.0.0.2", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "4420", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:21.069 "hdgst": false, 00:28:21.069 "ddgst": false 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 },{ 
00:28:21.069 "params": { 00:28:21.069 "name": "Nvme3", 00:28:21.069 "trtype": "tcp", 00:28:21.069 "traddr": "10.0.0.2", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "4420", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:21.069 "hdgst": false, 00:28:21.069 "ddgst": false 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 },{ 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme4", 00:28:21.069 "trtype": "tcp", 00:28:21.069 "traddr": "10.0.0.2", 00:28:21.069 "adrfam": "ipv4", 00:28:21.069 "trsvcid": "4420", 00:28:21.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:21.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:21.069 "hdgst": false, 00:28:21.069 "ddgst": false 00:28:21.069 }, 00:28:21.069 "method": "bdev_nvme_attach_controller" 00:28:21.069 },{ 00:28:21.069 "params": { 00:28:21.069 "name": "Nvme5", 00:28:21.069 "trtype": "tcp", 00:28:21.069 "traddr": "10.0.0.2", 00:28:21.069 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:21.070 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 },{ 00:28:21.070 "params": { 00:28:21.070 "name": "Nvme6", 00:28:21.070 "trtype": "tcp", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:21.070 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 },{ 00:28:21.070 "params": { 00:28:21.070 "name": "Nvme7", 00:28:21.070 "trtype": "tcp", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:21.070 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 },{ 00:28:21.070 "params": { 00:28:21.070 "name": "Nvme8", 00:28:21.070 "trtype": "tcp", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:21.070 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 },{ 00:28:21.070 "params": { 00:28:21.070 "name": "Nvme9", 00:28:21.070 "trtype": "tcp", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:21.070 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 },{ 00:28:21.070 "params": { 00:28:21.070 "name": "Nvme10", 00:28:21.070 "trtype": "tcp", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "adrfam": "ipv4", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:21.070 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:21.070 "hdgst": false, 00:28:21.070 "ddgst": false 00:28:21.070 }, 00:28:21.070 "method": "bdev_nvme_attach_controller" 00:28:21.070 }' 00:28:21.070 [2024-11-20 08:25:25.601226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.070 [2024-11-20 08:25:25.637701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.624 Running I/O for 10 seconds... 
00:28:22.624 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.624 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:22.625 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:22.625 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.625 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:22.885 08:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:22.885 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:23.145 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:23.407 08:25:28 
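The `waitforio` loop traced above (`shutdown.sh@60`-`@68`) samples `num_read_ops` for `Nvme1n1` via `bdev_get_iostat` every 0.25 s, breaking once at least 100 reads have completed or giving up after `i=10` attempts; the log shows the count climb 3, then 67, then 131 before the break. A sketch of that loop under a simplifying assumption: `get_read_ops` below is a hypothetical stub replaying the log's progression instead of running `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`.

```shell
# Hypothetical stub: replays the read-op counts seen in the log (3, 67, 131)
# instead of querying bdevperf over its RPC socket.
_samples=(3 67 131)
_n=0
get_read_ops() { READ_OPS=${_samples[$_n]}; _n=$(( _n + 1 )); }

# Sketch of the waitforio retry loop: succeed once >=100 reads are observed,
# fail if the retry budget runs out first.
waitforio() {
  local i=10 ret=1 count
  while (( i != 0 )); do
    get_read_ops; count=$READ_OPS
    if [ "$count" -ge 100 ]; then
      ret=0   # enough I/O in flight; safe to start shutting targets down
      break
    fi
    sleep 0.25
    (( i-- ))
  done
  return $ret
}

waitforio && echo "I/O threshold reached"
```

Only after this loop returns 0 does the test proceed to `killprocess` on the nvmf target, which is the point of tc3: shut the target down while verified I/O is still running.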
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2077902 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2077902 ']' 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2077902 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077902 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077902' 00:28:23.407 killing process with pid 2077902 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2077902 00:28:23.407 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2077902 00:28:23.407 [2024-11-20 08:25:28.126817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1562bc0 is same with the state(6) to be set 00:28:23.407 [2024-11-20 08:25:28.126875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1562bc0 is same with the state(6) to be set 00:28:23.407 [2024-11-20 08:25:28.126882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1562bc0 is same with the state(6) to be set 00:28:23.407 [2024-11-20 08:25:28.128266] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.408 [2024-11-20 08:25:28.128463] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.408 [2024-11-20 08:25:28.128468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128521] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128576] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.128586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1543860 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130872] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130934] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130990] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.130995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131051] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.409 [2024-11-20 08:25:28.131106] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.131110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.131115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563560 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.132240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.132262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.132268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.410 [2024-11-20 08:25:28.132273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132316] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132379] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132438] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132552] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.132578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563a50 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.686 [2024-11-20 08:25:28.133281] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563f20 is same with the state(6) to be set 00:28:23.687 [2024-11-20 08:25:28.133286] 
[previous message repeated verbatim for tqpair=0x1563f20 through 2024-11-20 08:25:28.133545] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15643f0 is same with the state(6) to be set 00:28:23.687 [2024-11-20 08:25:28.134464] 
[previous message repeated verbatim for tqpair=0x15643f0 through 2024-11-20 08:25:28.135620] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f32d0 is same with the state(6) to be set 00:28:23.688 [2024-11-20 08:25:28.135635] 
[previous message repeated verbatim for tqpair=0x12f32d0 through 2024-11-20 08:25:28.140638] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.689 [2024-11-20 08:25:28.140674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.689 [2024-11-20 08:25:28.140689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.689 [2024-11-20 08:25:28.140697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.689 [2024-11-20 08:25:28.140706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.689 [2024-11-20 08:25:28.140713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.689 [2024-11-20 08:25:28.140721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.689 [2024-11-20 08:25:28.140729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.689 [2024-11-20 08:25:28.140736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa610 is same with the state(6) to 
be set 00:28:23.689 [2024-11-20 08:25:28.140768] 
[preceding ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) sequence repeated verbatim for tqpair=0x167eb00, 0x167c960, 0x1ab89a0, 0x1b01a10, 0x1ac0380, and 0x1692560 through 2024-11-20 08:25:28.141338] 
00:28:23.690 [2024-11-20 08:25:28.141346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.690 [2024-11-20 08:25:28.141361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.690 [2024-11-20 08:25:28.141386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c850 is same with the state(6) to be set 00:28:23.690 [2024-11-20 08:25:28.141491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 
[2024-11-20 08:25:28.141649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.690 [2024-11-20 08:25:28.141665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.690 [2024-11-20 08:25:28.141675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.141983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.141990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 
[2024-11-20 08:25:28.142033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.691 [2024-11-20 08:25:28.142133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.691 [2024-11-20 08:25:28.142140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 
08:25:28.142406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.142488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.142497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.142505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.142514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.142522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.142531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.142538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.142549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.142556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.142565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.142574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.142582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189a0c0 is same with the state(6) to be set
00:28:23.692 [2024-11-20 08:25:28.144562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:23.692 [2024-11-20 08:25:28.144595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692560 (9): Bad file descriptor
00:28:23.692 [2024-11-20 08:25:28.145305] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145352] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145387] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145429] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145565] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.692 [2024-11-20 08:25:28.145802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692560 with addr=10.0.0.2, port=4420
00:28:23.692 [2024-11-20 08:25:28.145811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692560 is same with the state(6) to be set
00:28:23.692 [2024-11-20 08:25:28.145847] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:23.692 [2024-11-20 08:25:28.145885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.145897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.692 [2024-11-20 08:25:28.145910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.692 [2024-11-20 08:25:28.145918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.692 [2024-11-20 08:25:28.145928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.692 [2024-11-20 08:25:28.145935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.145945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.145952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.145962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.145969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.145986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.145995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146117] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 08:25:28.146386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.693 [2024-11-20 08:25:28.146393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.693 [2024-11-20 
08:25:28.146402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.693 [2024-11-20 08:25:28.146410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.693 [2024-11-20 08:25:28.146419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.693 [2024-11-20 08:25:28.146427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.693 [2024-11-20 08:25:28.146436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.693 [2024-11-20 08:25:28.146445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.693 [2024-11-20 08:25:28.146454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.693 [2024-11-20 08:25:28.146788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f37a0 is same with the state(6) to be set
00:28:23.694 [2024-11-20 08:25:28.147533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3c70 is same with the state(6) to be set
00:28:23.695 [2024-11-20 08:25:28.156077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.695 [2024-11-20 08:25:28.156121] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.695 [2024-11-20 08:25:28.156321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.695 [2024-11-20 08:25:28.156328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.696 [2024-11-20 08:25:28.156635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.156644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a932c0 is same with the state(6) to be set 00:28:23.696 [2024-11-20 08:25:28.156994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692560 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.696 [2024-11-20 08:25:28.157064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.157073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.696 [2024-11-20 08:25:28.157080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.157089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.696 [2024-11-20 08:25:28.157096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.157104] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.696 [2024-11-20 08:25:28.157111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.696 [2024-11-20 08:25:28.157119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee320 is same with the state(6) to be set 00:28:23.696 [2024-11-20 08:25:28.157142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa610 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167eb00 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167c960 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab89a0 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b01a10 (9): Bad file descriptor 00:28:23.696 [2024-11-20 08:25:28.157241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.696 [2024-11-20 08:25:28.157250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.157259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.697 [2024-11-20 08:25:28.157267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 
08:25:28.157275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.697 [2024-11-20 08:25:28.157282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.157290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.697 [2024-11-20 08:25:28.157298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.157309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee500 is same with the state(6) to be set 00:28:23.697 [2024-11-20 08:25:28.157326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac0380 (9): Bad file descriptor 00:28:23.697 [2024-11-20 08:25:28.157346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c850 (9): Bad file descriptor 00:28:23.697 [2024-11-20 08:25:28.157360] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:23.697 [2024-11-20 08:25:28.158790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:23.697 [2024-11-20 08:25:28.158829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:23.697 [2024-11-20 08:25:28.158839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:23.697 [2024-11-20 08:25:28.158851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:28:23.697 [2024-11-20 08:25:28.158869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:23.697 [2024-11-20 08:25:28.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.697 [2024-11-20 08:25:28.159441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c850 with addr=10.0.0.2, port=4420 00:28:23.697 [2024-11-20 08:25:28.159455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c850 is same with the state(6) to be set 00:28:23.697 [2024-11-20 08:25:28.159907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c850 (9): Bad file descriptor 00:28:23.697 [2024-11-20 08:25:28.159984] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:23.697 [2024-11-20 08:25:28.160025] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:23.697 [2024-11-20 08:25:28.160041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:23.697 [2024-11-20 08:25:28.160050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:23.697 [2024-11-20 08:25:28.160059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:23.697 [2024-11-20 08:25:28.160068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:23.697 [2024-11-20 08:25:28.160128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.697 [2024-11-20 08:25:28.160440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.697 [2024-11-20 08:25:28.160457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.697 [2024-11-20 08:25:28.160464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 
08:25:28.160818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.160987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.160995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.698 [2024-11-20 08:25:28.161004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.698 [2024-11-20 08:25:28.161012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.699 [2024-11-20 08:25:28.161021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated *NOTICE* READ / ABORTED - SQ DELETION (00/08) completion pairs for sqid:1 cid:51-63, lba 31104-32640 trimmed ...]
00:28:23.699 [2024-11-20 08:25:28.161242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a97120 is same with the state(6) to be set
[... repeated *NOTICE* READ / ABORTED - SQ DELETION (00/08) completion pairs for sqid:1 cid:0-63, lba 24576-32640 trimmed ...]
00:28:23.701 [2024-11-20 08:25:28.166946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a984c0 is same with the state(6) to be set
00:28:23.701 [2024-11-20 08:25:28.167089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee320 (9): Bad file descriptor
00:28:23.701 [2024-11-20 08:25:28.167111] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:23.701 [2024-11-20 08:25:28.167152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee500 (9): Bad file descriptor
00:28:23.701 [2024-11-20 08:25:28.169657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
[... repeated *NOTICE* READ / ABORTED - SQ DELETION (00/08) completion pairs for sqid:1 cid:0-38, lba 16384-21248 trimmed ...]
00:28:23.702 [2024-11-20 08:25:28.170411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.702 [2024-11-20 08:25:28.170530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.702 [2024-11-20 08:25:28.170540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.703 [2024-11-20 08:25:28.170610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.170844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.170852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ef0 is same with the state(6) to be set 00:28:23.703 [2024-11-20 08:25:28.172140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.703 [2024-11-20 08:25:28.172441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.703 [2024-11-20 08:25:28.172451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 
08:25:28.172588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.704 [2024-11-20 08:25:28.172880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.172984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.172991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.173001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.173008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.173018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.173025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.173035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.704 [2024-11-20 08:25:28.173043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.704 [2024-11-20 08:25:28.173052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.173249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.173257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a946d0 is same with the state(6) to be set 00:28:23.705 [2024-11-20 08:25:28.174530] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-20 08:25:28.174887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.705 [2024-11-20 08:25:28.174897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.174915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.174932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174940] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.174949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.174966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.174983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.174991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 
08:25:28.175137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.706 [2024-11-20 08:25:28.175426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.706 [2024-11-20 08:25:28.175459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.706 [2024-11-20 08:25:28.175467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.175639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.175647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a95b40 is same with the state(6) to be set 00:28:23.707 [2024-11-20 08:25:28.176934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.176948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.176959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.176967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.176977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.176985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.176994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177098] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177189] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.707 [2024-11-20 08:25:28.177240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.707 [2024-11-20 08:25:28.177249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 
08:25:28.177387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.708 [2024-11-20 08:25:28.177678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.708 [2024-11-20 08:25:28.177833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.708 [2024-11-20 08:25:28.177841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.177985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.177994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.178002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.178011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.178019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.178028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.178036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.178044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a96f40 is same with the state(6) to be set 00:28:23.709 [2024-11-20 08:25:28.179361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.709 [2024-11-20 08:25:28.179775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.709 [2024-11-20 08:25:28.179783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.179971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 
08:25:28.179988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.179996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.710 [2024-11-20 08:25:28.180280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.710 [2024-11-20 08:25:28.180386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.710 [2024-11-20 08:25:28.180394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.180404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.180412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.180422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.180430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.180439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.180447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.180457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.180464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.180474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.711 [2024-11-20 08:25:28.180481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.711 [2024-11-20 08:25:28.180490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99c20 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.181737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.181758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.181776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.182358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.182400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15aa610 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.182412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa610 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.182457] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:28:23.711 [2024-11-20 08:25:28.182472] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:28:23.711 [2024-11-20 08:25:28.182484] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:23.711 [2024-11-20 08:25:28.182504] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:28:23.711 [2024-11-20 08:25:28.182517] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:28:23.711 [2024-11-20 08:25:28.182530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa610 (9): Bad file descriptor
00:28:23.711 [2024-11-20 08:25:28.183143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.183161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.183171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.183180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.183189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:23.711 [2024-11-20 08:25:28.183572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.183586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aee320 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.183594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee320 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.184123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.184161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692560 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.184173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692560 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.184547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.184558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167eb00 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.184566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eb00 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.186292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.186309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167c960 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.186317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167c960 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.186629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.186638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab89a0 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.186646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab89a0 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.186813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.186823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac0380 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.186830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0380 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.187005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.187015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b01a10 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.187022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b01a10 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.187242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.711 [2024-11-20 08:25:28.187251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c850 with addr=10.0.0.2, port=4420
00:28:23.711 [2024-11-20 08:25:28.187259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c850 is same with the state(6) to be set
00:28:23.711 [2024-11-20 08:25:28.187269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee320 (9): Bad file descriptor
00:28:23.711 [2024-11-20 08:25:28.187280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692560 (9): Bad file descriptor
00:28:23.711 [2024-11-20 08:25:28.187290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167eb00 (9): Bad file descriptor
00:28:23.711 [2024-11-20 08:25:28.187299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:28:23.711 [2024-11-20 08:25:28.187307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:28:23.711 [2024-11-20 08:25:28.187316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:23.711 [2024-11-20 08:25:28.187326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:28:23.711 [2024-11-20 08:25:28.187431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.187444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.187461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.187470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.187480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.187487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.711 [2024-11-20 08:25:28.187497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.711 [2024-11-20 08:25:28.187504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.712 [2024-11-20 08:25:28.187741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187837] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.712 [2024-11-20 08:25:28.187956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.712 [2024-11-20 08:25:28.187964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.187973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.187981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.187990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.187997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 
08:25:28.188135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 
[2024-11-20 08:25:28.188425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.713 [2024-11-20 08:25:28.188512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.713 [2024-11-20 08:25:28.188522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.713 [2024-11-20 08:25:28.188529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.713 [2024-11-20 08:25:28.188539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.713 [2024-11-20 08:25:28.188546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.714 [2024-11-20 08:25:28.188554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99600 is same with the state(6) to be set
00:28:23.714 task offset: 24576 on job bdev=Nvme2n1 fails
00:28:23.714
00:28:23.714 Latency(us)
00:28:23.714 [2024-11-20T07:25:28.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.714 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme1n1 ended in about 0.97 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme1n1 : 0.97 131.63 8.23 65.82 0.00 320660.20 21299.20 274377.39
00:28:23.714 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme2n1 ended in about 0.94 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme2n1 : 0.94 203.25 12.70 67.75 0.00 228635.07 3222.19 255153.49
00:28:23.714 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme3n1 ended in about 0.96 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme3n1 : 0.96 200.21 12.51 66.74 0.00 227323.95 14854.83 234181.97
00:28:23.714 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme4n1 ended in about 0.97 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme4n1 : 0.97 196.96 12.31 65.65 0.00 226467.41 27962.03 232434.35
00:28:23.714 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme5n1 ended in about 0.98 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme5n1 : 0.98 130.99 8.19 65.49 0.00 296480.71 18131.63 251658.24
00:28:23.714 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme6n1 ended in about 0.98 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme6n1 : 0.98 196.00 12.25 65.33 0.00 217998.72 17476.27 253405.87
00:28:23.714 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme7n1 ended in about 0.97 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme7n1 : 0.97 198.19 12.39 66.06 0.00 210451.20 17694.72 249910.61
00:28:23.714 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme8n1 ended in about 0.97 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme8n1 : 0.97 197.95 12.37 65.98 0.00 205926.61 11796.48 251658.24
00:28:23.714 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme9n1 ended in about 0.99 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme9n1 : 0.99 133.32 8.33 64.64 0.00 269386.95 18459.31 267386.88
00:28:23.714 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.714 Job: Nvme10n1 ended in about 0.98 seconds with error
00:28:23.714 Verification LBA range: start 0x0 length 0x400
00:28:23.714 Nvme10n1 : 0.98 130.34 8.15 65.17 0.00 265992.53 21408.43 251658.24
00:28:23.714 [2024-11-20T07:25:28.443Z] ===================================================================================================================
00:28:23.714 [2024-11-20T07:25:28.443Z] Total : 1718.85 107.43 658.65 0.00 242401.66 3222.19 274377.39
00:28:23.714 [2024-11-20 08:25:28.215212] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:23.714 [2024-11-20 08:25:28.215250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:23.714 [2024-11-20 08:25:28.215286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167c960 (9): Bad file descriptor
00:28:23.714 [2024-11-20 08:25:28.215300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab89a0 (9): Bad file descriptor
00:28:23.714 [2024-11-20 08:25:28.215310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac0380 (9): Bad file descriptor
00:28:23.714 [2024-11-20 08:25:28.215320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b01a10 (9): Bad file descriptor
00:28:23.714 [2024-11-20 08:25:28.215330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c850 (9): Bad file descriptor
00:28:23.714 [2024-11-20 08:25:28.215339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:28:23.714 [2024-11-20 08:25:28.215346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:28:23.714 [2024-11-20 08:25:28.215355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:23.714 [2024-11-20 08:25:28.215364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:28:23.714 [2024-11-20 08:25:28.215372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.215378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.215385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.215392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:23.714 [2024-11-20 08:25:28.215399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.215406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.215418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.215424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:23.714 [2024-11-20 08:25:28.215951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.714 [2024-11-20 08:25:28.215970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aee500 with addr=10.0.0.2, port=4420 00:28:23.714 [2024-11-20 08:25:28.215980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee500 is same with the state(6) to be set 00:28:23.714 [2024-11-20 08:25:28.215988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.215994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.216002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.216009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:23.714 [2024-11-20 08:25:28.216016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.216022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.216029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.216036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:23.714 [2024-11-20 08:25:28.216043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.216050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.216057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.216063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:23.714 [2024-11-20 08:25:28.216070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.216077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.216084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.216091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:23.714 [2024-11-20 08:25:28.216098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:23.714 [2024-11-20 08:25:28.216104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:23.714 [2024-11-20 08:25:28.216111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:23.714 [2024-11-20 08:25:28.216118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:23.715 [2024-11-20 08:25:28.216518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee500 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.216570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.216658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.216664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.216671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:28:23.715 [2024-11-20 08:25:28.216698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.216725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:23.715 [2024-11-20 08:25:28.217085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.217099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15aa610 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.217106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa610 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.217172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.217183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167eb00 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.217191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eb00 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.217497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.217507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692560 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.217515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692560 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.217712] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.217722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aee320 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.217729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee320 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.218092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.218102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c850 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.218109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c850 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.218471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.218482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b01a10 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.218489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b01a10 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.218823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.218833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac0380 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.218844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0380 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.219163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.219174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1ab89a0 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.219181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab89a0 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.219373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.715 [2024-11-20 08:25:28.219382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167c960 with addr=10.0.0.2, port=4420 00:28:23.715 [2024-11-20 08:25:28.219389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167c960 is same with the state(6) to be set 00:28:23.715 [2024-11-20 08:25:28.219399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa610 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167eb00 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692560 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee320 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c850 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b01a10 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac0380 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab89a0 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219496] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167c960 (9): Bad file descriptor 00:28:23.715 [2024-11-20 08:25:28.219505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.219511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.219519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.219525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:23.715 [2024-11-20 08:25:28.219533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.219539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.219546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.219552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:23.715 [2024-11-20 08:25:28.219560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.219566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.219573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.219579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:28:23.715 [2024-11-20 08:25:28.219589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.219595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.219602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.219609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:23.715 [2024-11-20 08:25:28.219616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:23.715 [2024-11-20 08:25:28.219622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:23.715 [2024-11-20 08:25:28.219629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:23.715 [2024-11-20 08:25:28.219635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:23.716 [2024-11-20 08:25:28.219661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:23.716 [2024-11-20 08:25:28.219668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:23.716 [2024-11-20 08:25:28.219675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:23.716 [2024-11-20 08:25:28.219682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:28:23.716 [2024-11-20 08:25:28.219689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:23.716 [2024-11-20 08:25:28.219696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:23.716 [2024-11-20 08:25:28.219702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:23.716 [2024-11-20 08:25:28.219709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:23.716 [2024-11-20 08:25:28.219716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:23.716 [2024-11-20 08:25:28.219722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:23.716 [2024-11-20 08:25:28.219729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:23.716 [2024-11-20 08:25:28.219735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:23.716 [2024-11-20 08:25:28.219742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:23.716 [2024-11-20 08:25:28.219749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:23.716 [2024-11-20 08:25:28.219756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:23.716 [2024-11-20 08:25:28.219762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:23.716 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2078272 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2078272 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2078272 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:25.102 rmmod nvme_tcp 00:28:25.102 rmmod nvme_fabrics 00:28:25.102 rmmod nvme_keyring 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:28:25.102 08:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 2077902 ']' 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 2077902 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2077902 ']' 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2077902 00:28:25.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2077902) - No such process 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2077902 is not found' 00:28:25.102 Process with pid 2077902 is not found 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@254 -- # local dev 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:25.102 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # return 0 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n 
'' ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@274 -- # iptr 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-save 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-restore 00:28:27.018 00:28:27.018 real 0m8.018s 00:28:27.018 user 0m19.542s 00:28:27.018 sys 0m1.335s 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.018 ************************************ 00:28:27.018 END TEST nvmf_shutdown_tc3 00:28:27.018 ************************************ 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:27.018 ************************************ 00:28:27.018 START TEST nvmf_shutdown_tc4 00:28:27.018 ************************************ 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:27.018 08:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 
00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.018 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:27.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:27.019 08:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:27.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:27.019 Found net devices under 0000:31:00.0: cvl_0_0 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:27.019 Found net devices under 0000:31:00.1: cvl_0_1 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.019 
08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.019 
08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:27.019 08:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:27.019 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:27.280 10.0.0.1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local 
val=167772162 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:27.280 10.0.0.2 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:27.280 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:27.281 
08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:27.281 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:27.281 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:27.281 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:27.542 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:27.542 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:27.542 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:27.542 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:27.542 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:27.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.538 ms 00:28:27.543 00:28:27.543 --- 10.0.0.1 ping statistics --- 00:28:27.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.543 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:27.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:27.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:28:27.543 00:28:27.543 --- 10.0.0.2 ping statistics --- 00:28:27.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.543 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:27.543 08:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:27.543 08:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:27.543 08:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:27.543 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.544 08:25:32 
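The get_ip_address/get_net_dev traces above show how the harness resolves a logical device name (initiator0, target0) to a kernel interface (cvl_0_0, cvl_0_1) and then reads back the test IP it stored in that interface's ifalias. A minimal runnable sketch of that lookup, with a temp dir standing in for /sys/class/net and a simplified name map standing in for get_net_dev:

```shell
# Sketch of setup.sh's get_ip_address helper: map a logical device name to a
# kernel interface, then read the IP the harness stashed in its ifalias file.
# /sys/class/net is replaced by a temp dir so this runs anywhere.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/cvl_0_0" "$sysroot/cvl_0_1"
echo 10.0.0.1 > "$sysroot/cvl_0_0/ifalias"
echo 10.0.0.2 > "$sysroot/cvl_0_1/ifalias"

get_ip_address() {
    # $1 = logical device; the case statement is a simplified get_net_dev
    case $1 in
        initiator0) dev=cvl_0_0 ;;
        target0)    dev=cvl_0_1 ;;
        *)          return 1 ;;   # e.g. initiator1/target1: no device mapped
    esac
    ip=$(cat "$sysroot/$dev/ifalias") && [ -n "$ip" ] && echo "$ip"
}

initiator_ip=$(get_ip_address initiator0)
target_ip=$(get_ip_address target0)
echo "$initiator_ip -> $target_ip"   # 10.0.0.1 -> 10.0.0.2
rm -rf "$sysroot"
```

This mirrors why NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up empty in the trace: get_net_dev returns 1 for initiator1/target1, so no ifalias is read.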
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=2079571 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 2079571 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2079571 ']' 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:27.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.544 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:27.544 [2024-11-20 08:25:32.241507] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:27.544 [2024-11-20 08:25:32.241577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.805 [2024-11-20 08:25:32.345822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.805 [2024-11-20 08:25:32.385355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.805 [2024-11-20 08:25:32.385393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.805 [2024-11-20 08:25:32.385399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.805 [2024-11-20 08:25:32.385405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.805 [2024-11-20 08:25:32.385409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
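The nvmf_tgt command line above carries the `ip netns exec nvmf_ns_spdk` prefix four times. That is the cumulative effect of the setup.sh@250 line traced earlier, `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`, which re-prepends the namespace wrapper every time the setup path runs. A flat-string sketch of that effect (the real script uses bash arrays):

```shell
# Each pass through setup.sh@250 prepends the netns wrapper onto NVMF_APP;
# four passes yield the quadruple 'ip netns exec nvmf_ns_spdk' seen in the
# traced nvmf_tgt invocation. Strings stand in for bash arrays here.
ns_cmd="ip netns exec nvmf_ns_spdk"
app="nvmf_tgt -i 0 -e 0xFFFF -m 0x1E"
for pass in 1 2 3 4; do
    app="$ns_cmd $app"
done
echo "$app"
prefixes=$(echo "$app" | grep -o 'netns exec' | grep -c '')
echo "$prefixes"   # 4, matching the traced command line
```

The duplicated prefix is harmless (`ip netns exec` into the same namespace is idempotent), but it is a tell that the setup path executed four times before the target was launched.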
00:28:27.805 [2024-11-20 08:25:32.387127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.805 [2024-11-20 08:25:32.387286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.805 [2024-11-20 08:25:32.387446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.805 [2024-11-20 08:25:32.387447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:28.376 [2024-11-20 08:25:33.092205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.376 08:25:33 
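The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a waitforlisten-style poll loop. A runnable sketch, under the assumption that it simply retries until the socket path exists, with an ordinary temp file created by a background job standing in for the real spdk.sock:

```shell
# Poll for the target's RPC socket with bounded retries, as waitforlisten does.
# A plain file created by a background job stands in for the real spdk.sock.
sock=$(mktemp -u)                 # a path that does not exist yet
( sleep 1; : > "$sock" ) &        # stand-in for nvmf_tgt creating its socket
echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
i=0 max_retries=100 status=timeout
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$sock" ]; then status=listening; break; fi
    i=$((i + 1))
    sleep 0.1
done
echo "$status"
wait
rm -f "$sock"
```

Note the `max_retries=100` seen in the autotest_common.sh trace: the helper gives up rather than blocking forever if the target never opens its socket.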
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.376 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:28.637 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:28.637 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.638 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:28.638 Malloc1 00:28:28.638 [2024-11-20 08:25:33.203871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.638 Malloc2 00:28:28.638 Malloc3 00:28:28.638 Malloc4 00:28:28.638 Malloc5 00:28:28.899 Malloc6 00:28:28.899 Malloc7 00:28:28.899 Malloc8 00:28:28.899 Malloc9 
00:28:28.899 Malloc10 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2079834 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:28.899 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:29.159 [2024-11-20 08:25:33.663286] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
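The shutdown.sh trace above loops `for i in "${num_subsystems[@]}"` ten times, each iteration cat-ing a batch of RPC lines into rpcs.txt (the Malloc1 through Malloc10 bdevs are created as the batch is replayed). A sketch of that loop-and-cat shape; the RPC bodies below are illustrative assumptions, only the structure comes from the log:

```shell
# Sketch of shutdown.sh's create_subsystems loop: queue RPC lines per
# subsystem into rpcs.txt for a single later replay through rpc.py.
# The three RPCs per subsystem are assumed, not taken from the trace.
rpcs=$(mktemp)
for i in 1 2 3 4 5 6 7 8 9 10; do
    cat >> "$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_lines=$(grep -c '' "$rpcs")
echo "$rpc_lines"   # 30: three queued RPCs per subsystem
rm -f "$rpcs"
```

Batching into one file and replaying it once avoids ten separate rpc.py process startups, which matters on a CI node running many tests.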
00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2079571 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2079571 ']' 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2079571 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079571 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079571' 00:28:34.460 killing process with pid 2079571 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2079571 00:28:34.460 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2079571 00:28:34.460 [2024-11-20 08:25:38.681763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e1e0 is same with the state(6) to be set 00:28:34.460 [2024-11-20 
08:25:38.681807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e1e0 is same with the state(6) to be set 00:28:34.460 [message repeated through 08:25:38.682702 for tqpair=0x131e1e0, tqpair=0x131e6d0, tqpair=0x131eba0 and tqpair=0x131dd10] 00:28:34.460 Write completed with error (sct=0, sc=8) 00:28:34.460 starting I/O failed: -6 00:28:34.460 ["Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeated for the remaining queued I/O] 00:28:34.461 [2024-11-20 08:25:38.684762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.461 [2024-11-20 08:25:38.685635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.461 [2024-11-20 08:25:38.686580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.462 [write failures continue]
Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.687533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with starting I/O failed: -6 00:28:34.462 the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.687553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with the state(6) to be set 00:28:34.462 starting I/O failed: -6 00:28:34.462 [2024-11-20 08:25:38.687563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.687569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same 
with the state(6) to be set 00:28:34.462 starting I/O failed: -6 00:28:34.462 [2024-11-20 08:25:38.687579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f560 is same with Write completed with error (sct=0, sc=8) 00:28:34.462 the state(6) to be set 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 [2024-11-20 08:25:38.687750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131fa50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131fa50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131fa50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.687784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131fa50 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 [2024-11-20 08:25:38.688006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.462 NVMe io qpair process completion error 00:28:34.462 [2024-11-20 08:25:38.688020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ff20 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ff20 is same with the state(6) to be set 
00:28:34.462 [2024-11-20 08:25:38.688755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 
08:25:38.688822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.688827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cc50 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d120 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error 
(sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.689411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.689425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.689438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 starting I/O failed: -6 00:28:34.462 [2024-11-20 08:25:38.689449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 [2024-11-20 08:25:38.689454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 [2024-11-20 08:25:38.689459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d5f0 is same with the state(6) to be set 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 
00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 Write completed with error (sct=0, sc=8) 00:28:34.462 starting I/O failed: -6 00:28:34.463 [2024-11-20 08:25:38.689800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 [2024-11-20 08:25:38.689819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 [2024-11-20 08:25:38.689824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 [2024-11-20 08:25:38.689829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 [2024-11-20 08:25:38.689834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 [2024-11-20 08:25:38.689840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 
[2024-11-20 08:25:38.689844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321730 is same with the state(6) to be set 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: 
-6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 [2024-11-20 08:25:38.691049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 
starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 
Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 [2024-11-20 08:25:38.692440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.463 NVMe io qpair process completion error 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, 
sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 starting I/O failed: -6 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.463 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write 
completed with error (sct=0, sc=8) 00:28:34.464 [2024-11-20 08:25:38.693419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write 
completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 [2024-11-20 08:25:38.694246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 Write completed with error (sct=0, sc=8) 00:28:34.464 starting I/O failed: -6 
00:28:34.464 Write completed with error (sct=0, sc=8)
00:28:34.464 starting I/O failed: -6
[... preceding two lines repeated for each in-flight write; duplicates elided ...]
00:28:34.464 [2024-11-20 08:25:38.695204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.465 [2024-11-20 08:25:38.696660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:34.465 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.465 [2024-11-20 08:25:38.698039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.466 [2024-11-20 08:25:38.698869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.466 [2024-11-20 08:25:38.699805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.466 [2024-11-20 08:25:38.702467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:34.466 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.467 [2024-11-20 08:25:38.703546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.467 [2024-11-20 08:25:38.704384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:34.467 [2024-11-20 08:25:38.705346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines continue ...]
-6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 [2024-11-20 08:25:38.707271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.468 NVMe io qpair process completion error 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 
00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 [2024-11-20 08:25:38.708413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.468 starting I/O failed: -6 00:28:34.468 starting I/O failed: -6 00:28:34.468 starting I/O failed: -6 00:28:34.468 starting I/O failed: -6 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error 
(sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 
00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 [2024-11-20 08:25:38.709421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.468 starting I/O failed: -6 00:28:34.468 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 
00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 
00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 [2024-11-20 08:25:38.710387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 
Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 
00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: 
-6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 [2024-11-20 08:25:38.713010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.469 NVMe io qpair process completion error 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 starting I/O failed: -6 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.469 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error 
(sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 [2024-11-20 08:25:38.714148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 
00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 
00:28:34.470 starting I/O failed: -6 00:28:34.470 [2024-11-20 08:25:38.715126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with 
error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 
Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.470 Write completed with error (sct=0, sc=8) 00:28:34.470 starting I/O failed: -6 00:28:34.471 [2024-11-20 08:25:38.716075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O 
failed: -6 00:28:34.471 Write completed with error (sct=0, sc=8) 00:28:34.471 starting I/O failed: -6 00:28:34.471 [2024-11-20 08:25:38.717543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.471 NVMe io qpair process completion error 00:28:34.471 [2024-11-20 08:25:38.718652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.472 [2024-11-20 08:25:38.719511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.472 [2024-11-20 08:25:38.720465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.473 [2024-11-20 08:25:38.722164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.473 NVMe io qpair process completion error 00:28:34.473 [2024-11-20 08:25:38.723836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.473 [2024-11-20 08:25:38.724778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.474 [2024-11-20 08:25:38.727785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.474 NVMe io qpair process completion error 00:28:34.474 [2024-11-20 08:25:38.728970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.475 [2024-11-20 08:25:38.729803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 
starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 
Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 [2024-11-20 08:25:38.730771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, 
sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error 
(sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.475 Write completed with error (sct=0, sc=8) 00:28:34.475 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with 
error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 starting I/O failed: -6 00:28:34.476 [2024-11-20 08:25:38.732632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.476 NVMe io qpair process completion error 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 
00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 
00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Write completed with error (sct=0, sc=8) 00:28:34.476 Initializing NVMe Controllers 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:28:34.476 Controller IO queue size 128, less than required. 00:28:34.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:28:34.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:28:34.476 Initialization complete. Launching workers. 
00:28:34.476 ======================================================== 00:28:34.476 Latency(us) 00:28:34.476 Device Information : IOPS MiB/s Average min max 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1890.68 81.24 67719.66 542.46 126386.14 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1894.08 81.39 67635.08 965.78 129201.68 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1881.98 80.87 68090.27 695.97 123318.05 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1895.78 81.46 67611.70 921.54 121191.48 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1865.21 80.15 68761.99 729.01 134893.73 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1888.98 81.17 67218.20 885.93 123082.78 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1890.05 81.21 67467.95 609.91 122289.61 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1859.27 79.89 68328.21 861.99 122922.38 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1891.96 81.30 67170.01 663.77 120462.62 00:28:34.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1849.93 79.49 68731.83 713.21 122779.80 00:28:34.476 ======================================================== 00:28:34.476 Total : 18807.91 808.15 67869.48 542.46 134893.73 00:28:34.476 00:28:34.476 [2024-11-20 08:25:38.738058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b936b0 is same with the state(6) to be set 00:28:34.476 [2024-11-20 08:25:38.738102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92390 is same with the state(6) to be set 00:28:34.476 [2024-11-20 08:25:38.738132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b939e0 is same with the state(6) to be set 00:28:34.476 [2024-11-20 08:25:38.738161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94540 is same with the state(6) to be set 00:28:34.476 [2024-11-20 08:25:38.738190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93050 is same with the state(6) to be set 00:28:34.477 [2024-11-20 08:25:38.738219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94360 is same with the state(6) to be set 00:28:34.477 [2024-11-20 08:25:38.738247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93380 is same with the state(6) to be set 00:28:34.477 [2024-11-20 08:25:38.738276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92060 is same with the state(6) to be set 00:28:34.477 [2024-11-20 08:25:38.738304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b929f0 is same with the state(6) to be set 00:28:34.477 [2024-11-20 08:25:38.738331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b926c0 is same with the state(6) to be set 00:28:34.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:34.477 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:28:35.419 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2079834 00:28:35.419 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2079834 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2079834 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:35.420 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:35.420 rmmod nvme_tcp 00:28:35.420 rmmod nvme_fabrics 00:28:35.420 rmmod nvme_keyring 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 2079571 ']' 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 2079571 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2079571 ']' 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2079571 00:28:35.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2079571) - No such process 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2079571 is not found' 00:28:35.420 Process with pid 2079571 is not found 
00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@254 -- # local dev 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:35.420 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # return 0 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:37.965 08:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@274 -- # iptr 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- 
# iptables-save 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # iptables-restore 00:28:37.965 00:28:37.965 real 0m10.452s 00:28:37.965 user 0m27.878s 00:28:37.965 sys 0m4.170s 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:37.965 ************************************ 00:28:37.965 END TEST nvmf_shutdown_tc4 00:28:37.965 ************************************ 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:37.965 00:28:37.965 real 0m44.438s 00:28:37.965 user 1m44.003s 00:28:37.965 sys 0m14.718s 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:37.965 ************************************ 00:28:37.965 END TEST nvmf_shutdown 00:28:37.965 ************************************ 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:37.965 ************************************ 00:28:37.965 START TEST nvmf_nsid 00:28:37.965 ************************************ 00:28:37.965 08:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:37.965 * Looking for test storage... 00:28:37.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:37.965 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.966 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:37.966 --rc genhtml_branch_coverage=1 00:28:37.966 --rc genhtml_function_coverage=1 00:28:37.966 --rc genhtml_legend=1 00:28:37.966 --rc geninfo_all_blocks=1 00:28:37.966 --rc geninfo_unexecuted_blocks=1 00:28:37.966 00:28:37.966 ' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.966 --rc genhtml_branch_coverage=1 00:28:37.966 --rc genhtml_function_coverage=1 00:28:37.966 --rc genhtml_legend=1 00:28:37.966 --rc geninfo_all_blocks=1 00:28:37.966 --rc geninfo_unexecuted_blocks=1 00:28:37.966 00:28:37.966 ' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.966 --rc genhtml_branch_coverage=1 00:28:37.966 --rc genhtml_function_coverage=1 00:28:37.966 --rc genhtml_legend=1 00:28:37.966 --rc geninfo_all_blocks=1 00:28:37.966 --rc geninfo_unexecuted_blocks=1 00:28:37.966 00:28:37.966 ' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.966 --rc genhtml_branch_coverage=1 00:28:37.966 --rc genhtml_function_coverage=1 00:28:37.966 --rc genhtml_legend=1 00:28:37.966 --rc geninfo_all_blocks=1 00:28:37.966 --rc geninfo_unexecuted_blocks=1 00:28:37.966 00:28:37.966 ' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.966 
08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.966 08:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:28:37.966 08:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:37.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.966 08:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:28:37.966 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:46.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:46.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:46.109 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:46.109 Found net devices under 0000:31:00.0: cvl_0_0 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.109 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:46.109 Found net devices under 0000:31:00.1: cvl_0_1 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@247 -- # create_target_ns 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:46.109 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:46.110 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:46.110 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:46.110 10.0.0.1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:46.110 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:46.110 10.0.0.2 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:46.110 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:46.110 
08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.110 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 
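The `val_to_ip` helper traced above turns a 32-bit integer from the IP pool into dotted-quad notation (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2). A minimal standalone sketch of that conversion, reconstructed from the `printf '%u.%u.%u.%u'` call in the trace (the shifting arithmetic is an assumption consistent with the observed output, not copied from SPDK's setup.sh):

```shell
# Sketch of val_to_ip: split a 32-bit integer into four octets,
# most significant first, and print them dotted-quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

Bumping the pool by 2 per device pair (as the trace's `ip_pool += 2` does) yields one initiator and one target address per pair.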
00:28:46.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.525 ms 00:28:46.372 00:28:46.372 --- 10.0.0.1 ping statistics --- 00:28:46.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.372 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:46.372 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:46.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:46.372 00:28:46.372 --- 10.0.0.2 ping statistics --- 00:28:46.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.372 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:46.372 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.372 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_1 ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:46.373 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=2085884 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 2085884 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2085884 ']' 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.373 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:46.373 [2024-11-20 08:25:51.030720] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:46.373 [2024-11-20 08:25:51.030782] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.634 [2024-11-20 08:25:51.121763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.634 [2024-11-20 08:25:51.161818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.634 [2024-11-20 08:25:51.161850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.634 [2024-11-20 08:25:51.161859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.634 [2024-11-20 08:25:51.161872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:46.634 [2024-11-20 08:25:51.161877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.634 [2024-11-20 08:25:51.162473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2086002 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
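The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` accepts RPCs on its UNIX domain socket, retrying up to `max_retries=100` times per the traced locals. A hedged sketch of that polling loop, assuming a simple socket-existence check (SPDK's real helper probes the RPC endpoint; the check and sleep interval here are illustrative):

```shell
# Sketch of a waitforlisten-style poll: wait for a UNIX domain socket
# to appear, bounded by a retry count. Returns 1 if it never shows up.
wait_for_socket() {
  local sock=$1 max_retries=${2:-100} i=0
  while [ ! -S "$sock" ]; do
    i=$(( i + 1 ))
    [ "$i" -ge "$max_retries" ] && return 1   # give up after max_retries
    sleep 0.1
  done
  return 0
}
```

The trace's `/var/tmp/spdk.sock` and `/var/tmp/tgt2.sock` are the two sockets this pattern guards before any `rpc.py -s <sock>` call is issued.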
00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=03eecce7-7f0e-4fd6-aa52-3cd6b3113023 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=28cca619-b3bf-47c0-9e83-e881b56acc3c 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b60f7176-cc74-40da-96f4-4967a635a3fb 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@63 -- # rpc_cmd 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.206 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:47.206 null0 00:28:47.206 [2024-11-20 08:25:51.921368] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:28:47.206 [2024-11-20 08:25:51.921420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086002 ] 00:28:47.206 null1 00:28:47.466 null2 00:28:47.466 [2024-11-20 08:25:51.939777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.466 [2024-11-20 08:25:51.964007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.466 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2086002 /var/tmp/tgt2.sock 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2086002 ']' 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:47.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.466 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:47.466 [2024-11-20 08:25:52.018462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.466 [2024-11-20 08:25:52.054618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.726 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.726 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:47.726 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:47.986 [2024-11-20 08:25:52.528839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.986 [2024-11-20 08:25:52.544970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:47.986 nvme0n1 nvme0n2 00:28:47.987 nvme1n1 00:28:47.987 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:47.987 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:47.987 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:49.370 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:50.311 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:50.311 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 03eecce7-7f0e-4fd6-aa52-3cd6b3113023 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:50.573 08:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=03eecce77f0e4fd6aa523cd6b3113023 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 03EECCE77F0E4FD6AA523CD6B3113023 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 03EECCE77F0E4FD6AA523CD6B3113023 == \0\3\E\E\C\C\E\7\7\F\0\E\4\F\D\6\A\A\5\2\3\C\D\6\B\3\1\1\3\0\2\3 ]] 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 28cca619-b3bf-47c0-9e83-e881b56acc3c 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:50.573 
08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=28cca619b3bf47c09e83e881b56acc3c 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 28CCA619B3BF47C09E83E881B56ACC3C 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 28CCA619B3BF47C09E83E881B56ACC3C == \2\8\C\C\A\6\1\9\B\3\B\F\4\7\C\0\9\E\8\3\E\8\8\1\B\5\6\A\C\C\3\C ]] 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b60f7176-cc74-40da-96f4-4967a635a3fb 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
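The `uuid2nguid` comparison traced above works because an NVMe NGUID is just the namespace UUID with the dashes removed; the trace strips them with `tr -d -` and compares case-insensitively by uppercasing both sides. A minimal sketch of that conversion (the uppercasing step is inferred from the `03EECCE7...` echo in the trace):

```shell
# Sketch of uuid2nguid: drop the dashes from a UUID and uppercase it,
# matching the hex NGUID reported by `nvme id-ns ... | jq -r .nguid`.
uuid2nguid() {
  echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 03eecce7-7f0e-4fd6-aa52-3cd6b3113023
# -> 03EECCE77F0E4FD6AA523CD6B3113023
```

This is why the test can generate namespace UUIDs with `uuidgen` up front and later verify each attached block device's NGUID against them without any per-device bookkeeping.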
00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b60f7176cc7440da96f44967a635a3fb 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B60F7176CC7440DA96F44967A635A3FB 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B60F7176CC7440DA96F44967A635A3FB == \B\6\0\F\7\1\7\6\C\C\7\4\4\0\D\A\9\6\F\4\4\9\6\7\A\6\3\5\A\3\F\B ]] 00:28:50.573 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2086002 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2086002 ']' 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2086002 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2086002 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2086002' 00:28:50.834 killing process with pid 2086002 00:28:50.834 08:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2086002 00:28:50.834 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2086002 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:51.096 rmmod nvme_tcp 00:28:51.096 rmmod nvme_fabrics 00:28:51.096 rmmod nvme_keyring 00:28:51.096 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 2085884 ']' 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 2085884 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2085884 ']' 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2085884 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.356 08:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2085884 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2085884' 00:28:51.356 killing process with pid 2085884 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2085884 00:28:51.356 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2085884 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:51.356 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # return 0 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:53.901 08:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 
00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:53.901 00:28:53.901 real 0m15.871s 00:28:53.901 user 0m11.658s 00:28:53.901 sys 0m7.405s 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.901 ************************************ 00:28:53.901 END TEST nvmf_nsid 00:28:53.901 ************************************ 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:53.901 00:28:53.901 real 13m27.126s 00:28:53.901 user 27m15.750s 00:28:53.901 sys 4m9.579s 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.901 08:25:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:53.901 ************************************ 00:28:53.901 END TEST nvmf_target_extra 00:28:53.901 ************************************ 00:28:53.901 08:25:58 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:53.901 08:25:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:53.901 08:25:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.901 08:25:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.901 ************************************ 00:28:53.901 START TEST nvmf_host 00:28:53.901 ************************************ 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh 
--transport=tcp 00:28:53.901 * Looking for test storage... 00:28:53.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.901 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:53.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.902 --rc genhtml_branch_coverage=1 00:28:53.902 --rc genhtml_function_coverage=1 00:28:53.902 --rc genhtml_legend=1 00:28:53.902 --rc geninfo_all_blocks=1 00:28:53.902 --rc geninfo_unexecuted_blocks=1 00:28:53.902 00:28:53.902 ' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:53.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.902 --rc genhtml_branch_coverage=1 00:28:53.902 --rc genhtml_function_coverage=1 00:28:53.902 --rc genhtml_legend=1 00:28:53.902 --rc 
geninfo_all_blocks=1 00:28:53.902 --rc geninfo_unexecuted_blocks=1 00:28:53.902 00:28:53.902 ' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:53.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.902 --rc genhtml_branch_coverage=1 00:28:53.902 --rc genhtml_function_coverage=1 00:28:53.902 --rc genhtml_legend=1 00:28:53.902 --rc geninfo_all_blocks=1 00:28:53.902 --rc geninfo_unexecuted_blocks=1 00:28:53.902 00:28:53.902 ' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:53.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.902 --rc genhtml_branch_coverage=1 00:28:53.902 --rc genhtml_function_coverage=1 00:28:53.902 --rc genhtml_legend=1 00:28:53.902 --rc geninfo_all_blocks=1 00:28:53.902 --rc geninfo_unexecuted_blocks=1 00:28:53.902 00:28:53.902 ' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:53.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.902 ************************************ 00:28:53.902 START TEST nvmf_aer 00:28:53.902 ************************************ 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:53.902 * Looking for test storage... 00:28:53.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:28:53.902 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.165 08:25:58 
nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.165 --rc genhtml_branch_coverage=1 00:28:54.165 --rc genhtml_function_coverage=1 00:28:54.165 --rc genhtml_legend=1 00:28:54.165 --rc geninfo_all_blocks=1 00:28:54.165 --rc geninfo_unexecuted_blocks=1 00:28:54.165 00:28:54.165 ' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:28:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.165 --rc genhtml_branch_coverage=1 00:28:54.165 --rc genhtml_function_coverage=1 00:28:54.165 --rc genhtml_legend=1 00:28:54.165 --rc geninfo_all_blocks=1 00:28:54.165 --rc geninfo_unexecuted_blocks=1 00:28:54.165 00:28:54.165 ' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.165 --rc genhtml_branch_coverage=1 00:28:54.165 --rc genhtml_function_coverage=1 00:28:54.165 --rc genhtml_legend=1 00:28:54.165 --rc geninfo_all_blocks=1 00:28:54.165 --rc geninfo_unexecuted_blocks=1 00:28:54.165 00:28:54.165 ' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.165 --rc genhtml_branch_coverage=1 00:28:54.165 --rc genhtml_function_coverage=1 00:28:54.165 --rc genhtml_legend=1 00:28:54.165 --rc geninfo_all_blocks=1 00:28:54.165 --rc geninfo_unexecuted_blocks=1 00:28:54.165 00:28:54.165 ' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:54.165 08:25:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:54.165 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:54.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:28:54.166 08:25:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:02.311 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 
00:29:02.311 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:02.311 Found net devices under 0000:31:00.0: cvl_0_0 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:02.311 08:26:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:02.311 Found net devices under 0000:31:00.1: cvl_0_1 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@247 -- # create_target_ns 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:02.311 08:26:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:29:02.311 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:02.312 
08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:02.312 08:26:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:02.312 10.0.0.1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:02.312 10.0.0.2 
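The `set_ip` steps traced above expand an integer pool value into a dotted-quad address via `val_to_ip` (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2). A minimal sketch of that conversion; the shift/mask logic is an assumption inferred from the traced inputs and outputs, not copied from `nvmf/setup.sh`:

```shell
# Convert a 32-bit integer IP value to dotted-quad form, as val_to_ip does
# in the trace above (octet extraction inferred from traced input/output).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This matches the traced `printf '%u.%u.%u.%u\n' 10 0 0 1` expansion, and explains why the ip_pool counter advances by 2 per interface pair (one address each for initiator and target).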
00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:02.312 08:26:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:02.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.551 ms 00:29:02.312 00:29:02.312 --- 10.0.0.1 ping statistics --- 00:29:02.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.312 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:02.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:02.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:29:02.312 00:29:02.312 --- 10.0.0.2 ping statistics --- 00:29:02.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.312 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:29:02.312 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 
-- # dev=cvl_0_0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 
00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:29:02.313 08:26:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:02.313 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:02.574 08:26:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=2091722 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 2091722 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2091722 ']' 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.574 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:02.574 [2024-11-20 08:26:07.107806] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:29:02.574 [2024-11-20 08:26:07.107885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.574 [2024-11-20 08:26:07.198535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.574 [2024-11-20 08:26:07.241239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:02.574 [2024-11-20 08:26:07.241275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.574 [2024-11-20 08:26:07.241283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.574 [2024-11-20 08:26:07.241289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.574 [2024-11-20 08:26:07.241296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.574 [2024-11-20 08:26:07.243176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.574 [2024-11-20 08:26:07.243294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.574 [2024-11-20 08:26:07.243446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.574 [2024-11-20 08:26:07.243446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 [2024-11-20 08:26:07.961798] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 Malloc0 00:29:03.517 08:26:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 [2024-11-20 08:26:08.032215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 [ 00:29:03.517 { 00:29:03.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:03.517 "subtype": "Discovery", 00:29:03.517 "listen_addresses": [], 00:29:03.517 "allow_any_host": true, 00:29:03.517 "hosts": [] 00:29:03.517 }, 00:29:03.517 { 00:29:03.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.517 "subtype": "NVMe", 00:29:03.517 "listen_addresses": [ 00:29:03.517 { 00:29:03.517 "trtype": "TCP", 00:29:03.517 "adrfam": "IPv4", 00:29:03.517 "traddr": "10.0.0.2", 00:29:03.517 "trsvcid": "4420" 00:29:03.517 } 00:29:03.517 ], 00:29:03.517 "allow_any_host": true, 00:29:03.517 "hosts": [], 00:29:03.517 "serial_number": "SPDK00000000000001", 00:29:03.517 "model_number": "SPDK bdev Controller", 00:29:03.517 "max_namespaces": 2, 00:29:03.517 "min_cntlid": 1, 00:29:03.517 "max_cntlid": 65519, 00:29:03.517 "namespaces": [ 00:29:03.517 { 00:29:03.517 "nsid": 1, 00:29:03.517 "bdev_name": "Malloc0", 00:29:03.517 "name": "Malloc0", 00:29:03.517 "nguid": "F3B5B427B17E418FB782EF100D679685", 00:29:03.517 "uuid": "f3b5b427-b17e-418f-b782-ef100d679685" 00:29:03.517 } 00:29:03.517 ] 00:29:03.517 } 00:29:03.517 ] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2091913 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:03.517 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:03.518 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 Malloc1 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 Asynchronous Event Request test 00:29:03.779 Attaching to 10.0.0.2 00:29:03.779 Attached to 10.0.0.2 00:29:03.779 Registering asynchronous event callbacks... 00:29:03.779 Starting namespace attribute notice tests for all controllers... 00:29:03.779 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:03.779 aer_cb - Changed Namespace 00:29:03.779 Cleaning up... 
00:29:03.779 [ 00:29:03.779 { 00:29:03.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:03.779 "subtype": "Discovery", 00:29:03.779 "listen_addresses": [], 00:29:03.779 "allow_any_host": true, 00:29:03.779 "hosts": [] 00:29:03.779 }, 00:29:03.779 { 00:29:03.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.779 "subtype": "NVMe", 00:29:03.779 "listen_addresses": [ 00:29:03.779 { 00:29:03.779 "trtype": "TCP", 00:29:03.779 "adrfam": "IPv4", 00:29:03.779 "traddr": "10.0.0.2", 00:29:03.779 "trsvcid": "4420" 00:29:03.779 } 00:29:03.779 ], 00:29:03.779 "allow_any_host": true, 00:29:03.779 "hosts": [], 00:29:03.779 "serial_number": "SPDK00000000000001", 00:29:03.779 "model_number": "SPDK bdev Controller", 00:29:03.779 "max_namespaces": 2, 00:29:03.779 "min_cntlid": 1, 00:29:03.779 "max_cntlid": 65519, 00:29:03.779 "namespaces": [ 00:29:03.779 { 00:29:03.779 "nsid": 1, 00:29:03.779 "bdev_name": "Malloc0", 00:29:03.779 "name": "Malloc0", 00:29:03.779 "nguid": "F3B5B427B17E418FB782EF100D679685", 00:29:03.779 "uuid": "f3b5b427-b17e-418f-b782-ef100d679685" 00:29:03.779 }, 00:29:03.779 { 00:29:03.779 "nsid": 2, 00:29:03.779 "bdev_name": "Malloc1", 00:29:03.779 "name": "Malloc1", 00:29:03.779 "nguid": "37E8830756824B6FB6C93D8E1CB1C193", 00:29:03.779 "uuid": "37e88307-5682-4b6f-b6c9-3d8e1cb1c193" 00:29:03.779 } 00:29:03.779 ] 00:29:03.779 } 00:29:03.779 ] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2091913 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:03.779 rmmod nvme_tcp 00:29:03.779 rmmod nvme_fabrics 00:29:03.779 rmmod nvme_keyring 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 
2091722 ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 2091722 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2091722 ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2091722 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.779 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2091722 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2091722' 00:29:04.042 killing process with pid 2091722 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2091722 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2091722 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:04.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:06.590 08:26:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:06.590 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:06.591 08:26:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:29:06.591 00:29:06.591 real 0m12.246s 00:29:06.591 user 0m8.079s 00:29:06.591 sys 0m6.737s 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.591 ************************************ 00:29:06.591 END TEST nvmf_aer 00:29:06.591 ************************************ 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.591 ************************************ 00:29:06.591 START TEST nvmf_async_init 00:29:06.591 ************************************ 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:06.591 * Looking for test storage... 
00:29:06.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.591 08:26:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:06.591 08:26:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:06.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.591 --rc genhtml_branch_coverage=1 00:29:06.591 --rc genhtml_function_coverage=1 00:29:06.591 --rc genhtml_legend=1 00:29:06.591 --rc geninfo_all_blocks=1 00:29:06.591 --rc geninfo_unexecuted_blocks=1 00:29:06.591 
00:29:06.591 ' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:06.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.591 --rc genhtml_branch_coverage=1 00:29:06.591 --rc genhtml_function_coverage=1 00:29:06.591 --rc genhtml_legend=1 00:29:06.591 --rc geninfo_all_blocks=1 00:29:06.591 --rc geninfo_unexecuted_blocks=1 00:29:06.591 00:29:06.591 ' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:06.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.591 --rc genhtml_branch_coverage=1 00:29:06.591 --rc genhtml_function_coverage=1 00:29:06.591 --rc genhtml_legend=1 00:29:06.591 --rc geninfo_all_blocks=1 00:29:06.591 --rc geninfo_unexecuted_blocks=1 00:29:06.591 00:29:06.591 ' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:06.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.591 --rc genhtml_branch_coverage=1 00:29:06.591 --rc genhtml_function_coverage=1 00:29:06.591 --rc genhtml_legend=1 00:29:06.591 --rc geninfo_all_blocks=1 00:29:06.591 --rc geninfo_unexecuted_blocks=1 00:29:06.591 00:29:06.591 ' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.591 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
paths/export.sh@5 -- # export PATH 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:06.592 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2dddba0f10364d89a60ed15caf9c40a5 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:29:06.592 08:26:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:29:14.739 08:26:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 
-- # [[ e810 == e810 ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:14.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:14.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:14.739 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:14.740 
08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:14.740 Found net devices under 0000:31:00.0: cvl_0_0 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:14.740 08:26:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:14.740 Found net devices under 0000:31:00.1: cvl_0_1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@247 -- # create_target_ns 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:14.740 10.0.0.1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:14.740 10.0.0.2 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:14.740 
08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:14.740 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:14.741 
08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:14.741 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:15.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.556 ms 00:29:15.002 00:29:15.002 --- 10.0.0.1 ping statistics --- 00:29:15.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.002 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:15.002 08:26:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:15.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:29:15.002 00:29:15.002 --- 10.0.0.2 ping statistics --- 00:29:15.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.002 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:15.002 
08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:15.002 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev= 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 
00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 
-- # dev= 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=2096779 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 2096779 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2096779 ']' 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.003 08:26:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.003 [2024-11-20 08:26:19.670460] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:29:15.003 [2024-11-20 08:26:19.670526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.263 [2024-11-20 08:26:19.760263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.263 [2024-11-20 08:26:19.800569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.263 [2024-11-20 08:26:19.800602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.263 [2024-11-20 08:26:19.800610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.263 [2024-11-20 08:26:19.800616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.263 [2024-11-20 08:26:19.800622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:15.263 [2024-11-20 08:26:19.801235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 [2024-11-20 08:26:20.506365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 null0 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2dddba0f10364d89a60ed15caf9c40a5 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.834 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.094 [2024-11-20 08:26:20.566661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.094 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.094 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:16.094 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.094 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.095 nvme0n1 00:29:16.095 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.095 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.095 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.095 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.355 [ 00:29:16.355 { 00:29:16.355 "name": "nvme0n1", 00:29:16.355 "aliases": [ 00:29:16.355 "2dddba0f-1036-4d89-a60e-d15caf9c40a5" 00:29:16.355 ], 00:29:16.355 "product_name": "NVMe disk", 00:29:16.355 "block_size": 512, 00:29:16.355 "num_blocks": 2097152, 00:29:16.355 "uuid": "2dddba0f-1036-4d89-a60e-d15caf9c40a5", 00:29:16.355 "numa_id": 0, 00:29:16.355 "assigned_rate_limits": { 00:29:16.355 "rw_ios_per_sec": 0, 00:29:16.355 "rw_mbytes_per_sec": 0, 00:29:16.355 "r_mbytes_per_sec": 0, 00:29:16.355 "w_mbytes_per_sec": 0 00:29:16.355 }, 00:29:16.355 "claimed": false, 00:29:16.355 "zoned": false, 00:29:16.355 "supported_io_types": { 00:29:16.355 "read": true, 00:29:16.355 "write": true, 00:29:16.355 "unmap": false, 00:29:16.355 "flush": true, 00:29:16.355 "reset": true, 00:29:16.355 "nvme_admin": true, 00:29:16.355 "nvme_io": true, 00:29:16.355 "nvme_io_md": false, 00:29:16.355 "write_zeroes": true, 00:29:16.355 "zcopy": false, 00:29:16.355 "get_zone_info": false, 00:29:16.355 "zone_management": false, 00:29:16.355 "zone_append": false, 00:29:16.355 "compare": true, 00:29:16.355 "compare_and_write": true, 00:29:16.355 "abort": true, 00:29:16.355 "seek_hole": false, 00:29:16.355 "seek_data": false, 00:29:16.355 "copy": true, 00:29:16.355 
"nvme_iov_md": false 00:29:16.355 }, 00:29:16.355 "memory_domains": [ 00:29:16.355 { 00:29:16.355 "dma_device_id": "system", 00:29:16.355 "dma_device_type": 1 00:29:16.355 } 00:29:16.355 ], 00:29:16.355 "driver_specific": { 00:29:16.355 "nvme": [ 00:29:16.355 { 00:29:16.355 "trid": { 00:29:16.355 "trtype": "TCP", 00:29:16.355 "adrfam": "IPv4", 00:29:16.355 "traddr": "10.0.0.2", 00:29:16.355 "trsvcid": "4420", 00:29:16.355 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.355 }, 00:29:16.355 "ctrlr_data": { 00:29:16.355 "cntlid": 1, 00:29:16.355 "vendor_id": "0x8086", 00:29:16.355 "model_number": "SPDK bdev Controller", 00:29:16.355 "serial_number": "00000000000000000000", 00:29:16.355 "firmware_revision": "25.01", 00:29:16.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.355 "oacs": { 00:29:16.355 "security": 0, 00:29:16.355 "format": 0, 00:29:16.355 "firmware": 0, 00:29:16.355 "ns_manage": 0 00:29:16.355 }, 00:29:16.355 "multi_ctrlr": true, 00:29:16.355 "ana_reporting": false 00:29:16.355 }, 00:29:16.355 "vs": { 00:29:16.355 "nvme_version": "1.3" 00:29:16.355 }, 00:29:16.355 "ns_data": { 00:29:16.355 "id": 1, 00:29:16.355 "can_share": true 00:29:16.355 } 00:29:16.355 } 00:29:16.355 ], 00:29:16.355 "mp_policy": "active_passive" 00:29:16.355 } 00:29:16.355 } 00:29:16.355 ] 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.355 [2024-11-20 08:26:20.843980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:16.355 [2024-11-20 08:26:20.844046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1deb460 (9): Bad file descriptor 00:29:16.355 [2024-11-20 08:26:20.975962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.355 08:26:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.355 [ 00:29:16.355 { 00:29:16.355 "name": "nvme0n1", 00:29:16.355 "aliases": [ 00:29:16.355 "2dddba0f-1036-4d89-a60e-d15caf9c40a5" 00:29:16.355 ], 00:29:16.355 "product_name": "NVMe disk", 00:29:16.355 "block_size": 512, 00:29:16.355 "num_blocks": 2097152, 00:29:16.355 "uuid": "2dddba0f-1036-4d89-a60e-d15caf9c40a5", 00:29:16.355 "numa_id": 0, 00:29:16.355 "assigned_rate_limits": { 00:29:16.355 "rw_ios_per_sec": 0, 00:29:16.355 "rw_mbytes_per_sec": 0, 00:29:16.355 "r_mbytes_per_sec": 0, 00:29:16.355 "w_mbytes_per_sec": 0 00:29:16.355 }, 00:29:16.355 "claimed": false, 00:29:16.355 "zoned": false, 00:29:16.355 "supported_io_types": { 00:29:16.355 "read": true, 00:29:16.355 "write": true, 00:29:16.355 "unmap": false, 00:29:16.355 "flush": true, 00:29:16.355 "reset": true, 00:29:16.355 "nvme_admin": true, 00:29:16.355 "nvme_io": true, 00:29:16.355 "nvme_io_md": false, 00:29:16.355 "write_zeroes": true, 00:29:16.355 "zcopy": false, 00:29:16.355 "get_zone_info": false, 00:29:16.355 "zone_management": false, 00:29:16.355 "zone_append": false, 00:29:16.355 "compare": true, 00:29:16.355 "compare_and_write": true, 00:29:16.355 "abort": true, 00:29:16.355 "seek_hole": false, 00:29:16.355 "seek_data": false, 00:29:16.355 "copy": true, 00:29:16.355 "nvme_iov_md": false 00:29:16.356 }, 00:29:16.356 "memory_domains": [ 
00:29:16.356 { 00:29:16.356 "dma_device_id": "system", 00:29:16.356 "dma_device_type": 1 00:29:16.356 } 00:29:16.356 ], 00:29:16.356 "driver_specific": { 00:29:16.356 "nvme": [ 00:29:16.356 { 00:29:16.356 "trid": { 00:29:16.356 "trtype": "TCP", 00:29:16.356 "adrfam": "IPv4", 00:29:16.356 "traddr": "10.0.0.2", 00:29:16.356 "trsvcid": "4420", 00:29:16.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.356 }, 00:29:16.356 "ctrlr_data": { 00:29:16.356 "cntlid": 2, 00:29:16.356 "vendor_id": "0x8086", 00:29:16.356 "model_number": "SPDK bdev Controller", 00:29:16.356 "serial_number": "00000000000000000000", 00:29:16.356 "firmware_revision": "25.01", 00:29:16.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.356 "oacs": { 00:29:16.356 "security": 0, 00:29:16.356 "format": 0, 00:29:16.356 "firmware": 0, 00:29:16.356 "ns_manage": 0 00:29:16.356 }, 00:29:16.356 "multi_ctrlr": true, 00:29:16.356 "ana_reporting": false 00:29:16.356 }, 00:29:16.356 "vs": { 00:29:16.356 "nvme_version": "1.3" 00:29:16.356 }, 00:29:16.356 "ns_data": { 00:29:16.356 "id": 1, 00:29:16.356 "can_share": true 00:29:16.356 } 00:29:16.356 } 00:29:16.356 ], 00:29:16.356 "mp_policy": "active_passive" 00:29:16.356 } 00:29:16.356 } 00:29:16.356 ] 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YXmI65lmcP 
00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YXmI65lmcP 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.YXmI65lmcP 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.356 [2024-11-20 08:26:21.064671] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:16.356 [2024-11-20 08:26:21.064788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.356 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.617 [2024-11-20 08:26:21.088749] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.617 nvme0n1 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.617 [ 00:29:16.617 { 00:29:16.617 "name": "nvme0n1", 00:29:16.617 "aliases": [ 00:29:16.617 "2dddba0f-1036-4d89-a60e-d15caf9c40a5" 00:29:16.617 ], 00:29:16.617 "product_name": "NVMe disk", 00:29:16.617 "block_size": 512, 00:29:16.617 "num_blocks": 2097152, 00:29:16.617 "uuid": "2dddba0f-1036-4d89-a60e-d15caf9c40a5", 00:29:16.617 "numa_id": 0, 00:29:16.617 "assigned_rate_limits": { 00:29:16.617 "rw_ios_per_sec": 0, 00:29:16.617 
"rw_mbytes_per_sec": 0, 00:29:16.617 "r_mbytes_per_sec": 0, 00:29:16.617 "w_mbytes_per_sec": 0 00:29:16.617 }, 00:29:16.617 "claimed": false, 00:29:16.617 "zoned": false, 00:29:16.617 "supported_io_types": { 00:29:16.617 "read": true, 00:29:16.617 "write": true, 00:29:16.617 "unmap": false, 00:29:16.617 "flush": true, 00:29:16.617 "reset": true, 00:29:16.617 "nvme_admin": true, 00:29:16.617 "nvme_io": true, 00:29:16.617 "nvme_io_md": false, 00:29:16.617 "write_zeroes": true, 00:29:16.617 "zcopy": false, 00:29:16.617 "get_zone_info": false, 00:29:16.617 "zone_management": false, 00:29:16.617 "zone_append": false, 00:29:16.617 "compare": true, 00:29:16.617 "compare_and_write": true, 00:29:16.617 "abort": true, 00:29:16.617 "seek_hole": false, 00:29:16.617 "seek_data": false, 00:29:16.617 "copy": true, 00:29:16.617 "nvme_iov_md": false 00:29:16.617 }, 00:29:16.617 "memory_domains": [ 00:29:16.617 { 00:29:16.617 "dma_device_id": "system", 00:29:16.617 "dma_device_type": 1 00:29:16.617 } 00:29:16.617 ], 00:29:16.617 "driver_specific": { 00:29:16.617 "nvme": [ 00:29:16.617 { 00:29:16.617 "trid": { 00:29:16.617 "trtype": "TCP", 00:29:16.617 "adrfam": "IPv4", 00:29:16.617 "traddr": "10.0.0.2", 00:29:16.617 "trsvcid": "4421", 00:29:16.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.617 }, 00:29:16.617 "ctrlr_data": { 00:29:16.617 "cntlid": 3, 00:29:16.617 "vendor_id": "0x8086", 00:29:16.617 "model_number": "SPDK bdev Controller", 00:29:16.617 "serial_number": "00000000000000000000", 00:29:16.617 "firmware_revision": "25.01", 00:29:16.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.617 "oacs": { 00:29:16.617 "security": 0, 00:29:16.617 "format": 0, 00:29:16.617 "firmware": 0, 00:29:16.617 "ns_manage": 0 00:29:16.617 }, 00:29:16.617 "multi_ctrlr": true, 00:29:16.617 "ana_reporting": false 00:29:16.617 }, 00:29:16.617 "vs": { 00:29:16.617 "nvme_version": "1.3" 00:29:16.617 }, 00:29:16.617 "ns_data": { 00:29:16.617 "id": 1, 00:29:16.617 "can_share": true 00:29:16.617 } 
00:29:16.617 } 00:29:16.617 ], 00:29:16.617 "mp_policy": "active_passive" 00:29:16.617 } 00:29:16.617 } 00:29:16.617 ] 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.YXmI65lmcP 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:16.617 rmmod nvme_tcp 00:29:16.617 rmmod nvme_fabrics 00:29:16.617 rmmod nvme_keyring 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:29:16.617 08:26:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 2096779 ']' 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 2096779 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2096779 ']' 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2096779 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.617 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2096779 00:29:16.877 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.877 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.877 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2096779' 00:29:16.877 killing process with pid 2096779 00:29:16.877 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2096779 00:29:16.877 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2096779 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:29:16.878 08:26:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:19.424 08:26:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:29:19.424 00:29:19.424 real 0m12.755s 00:29:19.424 user 0m4.483s 00:29:19.424 sys 0m6.812s 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.424 ************************************ 00:29:19.424 END TEST nvmf_async_init 00:29:19.424 ************************************ 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.424 ************************************ 00:29:19.424 START TEST nvmf_identify 00:29:19.424 ************************************ 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.424 * Looking for test storage... 00:29:19.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.424 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:19.425 
08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.425 --rc genhtml_branch_coverage=1 00:29:19.425 --rc genhtml_function_coverage=1 00:29:19.425 --rc genhtml_legend=1 00:29:19.425 --rc 
geninfo_all_blocks=1 00:29:19.425 --rc geninfo_unexecuted_blocks=1 00:29:19.425 00:29:19.425 ' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.425 --rc genhtml_branch_coverage=1 00:29:19.425 --rc genhtml_function_coverage=1 00:29:19.425 --rc genhtml_legend=1 00:29:19.425 --rc geninfo_all_blocks=1 00:29:19.425 --rc geninfo_unexecuted_blocks=1 00:29:19.425 00:29:19.425 ' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.425 --rc genhtml_branch_coverage=1 00:29:19.425 --rc genhtml_function_coverage=1 00:29:19.425 --rc genhtml_legend=1 00:29:19.425 --rc geninfo_all_blocks=1 00:29:19.425 --rc geninfo_unexecuted_blocks=1 00:29:19.425 00:29:19.425 ' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.425 --rc genhtml_branch_coverage=1 00:29:19.425 --rc genhtml_function_coverage=1 00:29:19.425 --rc genhtml_legend=1 00:29:19.425 --rc geninfo_all_blocks=1 00:29:19.425 --rc geninfo_unexecuted_blocks=1 00:29:19.425 00:29:19.425 ' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:19.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:19.425 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:19.426 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:29:19.426 08:26:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:27.574 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:27.575 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:27.575 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:27.575 Found net devices under 0000:31:00.0: cvl_0_0 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:27.575 Found net devices under 0000:31:00.1: cvl_0_1 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@247 -- # create_target_ns 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:27.575 08:26:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:27.575 08:26:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:27.575 08:26:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:27.575 10.0.0.1 00:29:27.575 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:27.576 10.0.0.2 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:27.576 
08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:27.576 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:27.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.552 ms 00:29:27.838 00:29:27.838 --- 10.0.0.1 ping statistics --- 00:29:27.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.838 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:27.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:29:27.838 00:29:27.838 --- 10.0.0.2 ping statistics --- 00:29:27.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.838 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:27.838 08:26:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:29:27.838 08:26:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:27.838 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2101837 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2101837 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2101837 ']' 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.839 08:26:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:27.839 [2024-11-20 08:26:32.500664] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:29:27.839 [2024-11-20 08:26:32.500726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.099 [2024-11-20 08:26:32.590593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.099 [2024-11-20 08:26:32.633355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.099 [2024-11-20 08:26:32.633393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.099 [2024-11-20 08:26:32.633401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.099 [2024-11-20 08:26:32.633411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.099 [2024-11-20 08:26:32.633417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
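The `nvmf/setup.sh` trace above resolves each logical device name (`initiator0`, `target0`, ...) to a kernel netdev and then reads its IP from `/sys/class/net/<dev>/ifalias`, prefixing the read with `ip netns exec <ns>` when the device lives in the target namespace. A minimal sketch of that lookup pattern (not the SPDK script itself; `SYSFS_NET` is an added override so the function can be exercised against a fake sysfs tree):

```shell
#!/usr/bin/env bash
# Sketch of the ifalias-based IP lookup seen in the trace, under the
# assumption that the IP was stored in the interface alias at setup time.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

get_ip_address() {
    # $1 = resolved netdev (e.g. cvl_0_0)
    # $2 = optional command prefix, e.g. "ip netns exec nvmf_ns_spdk"
    local dev=$1 in_ns=${2:-} ip
    # Read the alias; an empty/missing alias yields no output, like
    # the "[[ -n ... ]]" guard in the traced script.
    ip=$(eval "$in_ns cat $SYSFS_NET/$dev/ifalias" 2>/dev/null)
    [ -n "$ip" ] && echo "$ip"
}
```

In the log, `get_ip_address initiator0` resolves to `cat /sys/class/net/cvl_0_0/ifalias` → `10.0.0.1`, while `target0` runs the same read inside the `nvmf_ns_spdk` namespace → `10.0.0.2`; devices with no backing netdev (`initiator1`, `target1`) return nothing, so the corresponding `NVMF_SECOND_*` variables stay empty.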
00:29:28.099 [2024-11-20 08:26:32.635179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.099 [2024-11-20 08:26:32.635295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.099 [2024-11-20 08:26:32.635451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.099 [2024-11-20 08:26:32.635452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.672 [2024-11-20 08:26:33.298011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.672 Malloc0 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.672 08:26:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.672 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.938 [2024-11-20 08:26:33.413935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.938 08:26:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.938 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.938 [ 00:29:28.938 { 00:29:28.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:28.938 "subtype": "Discovery", 00:29:28.938 "listen_addresses": [ 00:29:28.938 { 00:29:28.938 "trtype": "TCP", 00:29:28.938 "adrfam": "IPv4", 00:29:28.938 "traddr": "10.0.0.2", 00:29:28.938 "trsvcid": "4420" 00:29:28.938 } 00:29:28.938 ], 00:29:28.938 "allow_any_host": true, 00:29:28.938 "hosts": [] 00:29:28.938 }, 00:29:28.939 { 00:29:28.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:28.939 "subtype": "NVMe", 00:29:28.939 "listen_addresses": [ 00:29:28.939 { 00:29:28.939 "trtype": "TCP", 00:29:28.939 "adrfam": "IPv4", 00:29:28.939 "traddr": "10.0.0.2", 00:29:28.939 "trsvcid": "4420" 00:29:28.939 } 00:29:28.939 ], 00:29:28.939 "allow_any_host": true, 00:29:28.939 "hosts": [], 00:29:28.939 "serial_number": "SPDK00000000000001", 00:29:28.939 "model_number": "SPDK bdev Controller", 00:29:28.939 "max_namespaces": 32, 00:29:28.939 "min_cntlid": 1, 00:29:28.939 "max_cntlid": 65519, 00:29:28.939 "namespaces": [ 00:29:28.939 { 00:29:28.939 "nsid": 1, 00:29:28.939 "bdev_name": "Malloc0", 00:29:28.939 "name": "Malloc0", 00:29:28.939 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:28.939 "eui64": "ABCDEF0123456789", 00:29:28.939 "uuid": "fbc444a6-6168-4fb8-9417-5a902919994f" 00:29:28.939 } 00:29:28.939 ] 00:29:28.939 } 00:29:28.939 ] 00:29:28.939 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.939 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:28.939 [2024-11-20 08:26:33.479120] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:29:28.939 [2024-11-20 08:26:33.479203] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102135 ] 00:29:28.939 [2024-11-20 08:26:33.537988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:28.939 [2024-11-20 08:26:33.538036] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:28.939 [2024-11-20 08:26:33.538041] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:28.939 [2024-11-20 08:26:33.538057] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:28.939 [2024-11-20 08:26:33.538066] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:28.939 [2024-11-20 08:26:33.542191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:28.939 [2024-11-20 08:26:33.542223] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14dd550 0 00:29:28.939 [2024-11-20 08:26:33.549874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:28.939 [2024-11-20 08:26:33.549887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:28.939 [2024-11-20 08:26:33.549892] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:28.939 [2024-11-20 08:26:33.549896] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:28.939 [2024-11-20 08:26:33.549928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.549934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.549938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.939 [2024-11-20 08:26:33.549952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:28.939 [2024-11-20 08:26:33.549970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.557876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.557886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.939 [2024-11-20 08:26:33.557889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.557894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.557908] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:28.939 [2024-11-20 08:26:33.557915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:28.939 [2024-11-20 08:26:33.557925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:28.939 [2024-11-20 08:26:33.557939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.557943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.557947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 
00:29:28.939 [2024-11-20 08:26:33.557955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.939 [2024-11-20 08:26:33.557969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.558159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.558166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.939 [2024-11-20 08:26:33.558169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.558179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:28.939 [2024-11-20 08:26:33.558187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:28.939 [2024-11-20 08:26:33.558194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.939 [2024-11-20 08:26:33.558208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.939 [2024-11-20 08:26:33.558219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.558406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.558412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:28.939 [2024-11-20 08:26:33.558417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.558429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:28.939 [2024-11-20 08:26:33.558437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:28.939 [2024-11-20 08:26:33.558443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.939 [2024-11-20 08:26:33.558459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.939 [2024-11-20 08:26:33.558469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.558645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.558652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.939 [2024-11-20 08:26:33.558655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.558665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:28.939 [2024-11-20 08:26:33.558674] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.939 [2024-11-20 08:26:33.558693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.939 [2024-11-20 08:26:33.558703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.558884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.558891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.939 [2024-11-20 08:26:33.558895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.558899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.558904] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:28.939 [2024-11-20 08:26:33.558909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:28.939 [2024-11-20 08:26:33.558916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:28.939 [2024-11-20 08:26:33.559024] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:28.939 [2024-11-20 08:26:33.559029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:28.939 [2024-11-20 08:26:33.559037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.559041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.559045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.939 [2024-11-20 08:26:33.559052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.939 [2024-11-20 08:26:33.559063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.939 [2024-11-20 08:26:33.559235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.939 [2024-11-20 08:26:33.559241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.939 [2024-11-20 08:26:33.559245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.939 [2024-11-20 08:26:33.559249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.939 [2024-11-20 08:26:33.559253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:28.939 [2024-11-20 08:26:33.559263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.559277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.940 [2024-11-20 08:26:33.559287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.940 [2024-11-20 
08:26:33.559490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.940 [2024-11-20 08:26:33.559496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.940 [2024-11-20 08:26:33.559499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.940 [2024-11-20 08:26:33.559508] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:28.940 [2024-11-20 08:26:33.559517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.559525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:28.940 [2024-11-20 08:26:33.559532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.559541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.559551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.940 [2024-11-20 08:26:33.559562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.940 [2024-11-20 08:26:33.559798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:28.940 [2024-11-20 08:26:33.559805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:28.940 [2024-11-20 08:26:33.559809] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559813] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14dd550): datao=0, datal=4096, cccid=0 00:29:28.940 [2024-11-20 08:26:33.559818] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153f100) on tqpair(0x14dd550): expected_datao=0, payload_size=4096 00:29:28.940 [2024-11-20 08:26:33.559823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559831] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559835] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.940 [2024-11-20 08:26:33.559992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.940 [2024-11-20 08:26:33.559995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.559999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.940 [2024-11-20 08:26:33.560007] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:28.940 [2024-11-20 08:26:33.560012] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:28.940 [2024-11-20 08:26:33.560017] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:28.940 [2024-11-20 08:26:33.560024] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:28.940 [2024-11-20 08:26:33.560029] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:28.940 [2024-11-20 08:26:33.560035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.560045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.560052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:28.940 [2024-11-20 08:26:33.560077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.940 [2024-11-20 08:26:33.560260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.940 [2024-11-20 08:26:33.560266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.940 [2024-11-20 08:26:33.560270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.940 [2024-11-20 08:26:33.560282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560295] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.940 [2024-11-20 08:26:33.560302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.940 [2024-11-20 08:26:33.560321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.940 [2024-11-20 08:26:33.560340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.940 [2024-11-20 08:26:33.560358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.560365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:28.940 [2024-11-20 08:26:33.560372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.940 [2024-11-20 08:26:33.560394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f100, cid 0, qid 0 00:29:28.940 [2024-11-20 08:26:33.560399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f280, cid 1, qid 0 00:29:28.940 [2024-11-20 08:26:33.560404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f400, cid 2, qid 0 00:29:28.940 [2024-11-20 08:26:33.560409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.940 [2024-11-20 08:26:33.560413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f700, cid 4, qid 0 00:29:28.940 [2024-11-20 08:26:33.560656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.940 [2024-11-20 08:26:33.560663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.940 [2024-11-20 08:26:33.560666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f700) on tqpair=0x14dd550 00:29:28.940 [2024-11-20 08:26:33.560680] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:28.940 [2024-11-20 08:26:33.560685] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:28.940 [2024-11-20 08:26:33.560695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14dd550) 00:29:28.940 [2024-11-20 08:26:33.560706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.940 [2024-11-20 08:26:33.560716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f700, cid 4, qid 0 00:29:28.940 [2024-11-20 08:26:33.560906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:28.940 [2024-11-20 08:26:33.560913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:28.940 [2024-11-20 08:26:33.560916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560920] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14dd550): datao=0, datal=4096, cccid=4 00:29:28.940 [2024-11-20 08:26:33.560924] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153f700) on tqpair(0x14dd550): expected_datao=0, payload_size=4096 00:29:28.940 [2024-11-20 08:26:33.560929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.560939] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.561125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.940 [2024-11-20 08:26:33.561131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.940 [2024-11-20 08:26:33.561135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.561139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x153f700) on tqpair=0x14dd550 00:29:28.940 [2024-11-20 08:26:33.561150] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:28.940 [2024-11-20 08:26:33.561171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.940 [2024-11-20 08:26:33.561176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14dd550) 00:29:28.941 [2024-11-20 08:26:33.561182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.941 [2024-11-20 08:26:33.561189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14dd550) 00:29:28.941 [2024-11-20 08:26:33.561203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.941 [2024-11-20 08:26:33.561217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f700, cid 4, qid 0 00:29:28.941 [2024-11-20 08:26:33.561222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f880, cid 5, qid 0 00:29:28.941 [2024-11-20 08:26:33.561469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:28.941 [2024-11-20 08:26:33.561476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:28.941 [2024-11-20 08:26:33.561479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14dd550): datao=0, datal=1024, cccid=4 00:29:28.941 [2024-11-20 08:26:33.561487] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153f700) on tqpair(0x14dd550): expected_datao=0, payload_size=1024 00:29:28.941 [2024-11-20 08:26:33.561491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561500] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561504] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.941 [2024-11-20 08:26:33.561515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.941 [2024-11-20 08:26:33.561519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.561523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f880) on tqpair=0x14dd550 00:29:28.941 [2024-11-20 08:26:33.605869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.941 [2024-11-20 08:26:33.605879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.941 [2024-11-20 08:26:33.605883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.605887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f700) on tqpair=0x14dd550 00:29:28.941 [2024-11-20 08:26:33.605898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.605902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14dd550) 00:29:28.941 [2024-11-20 08:26:33.605909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.941 [2024-11-20 08:26:33.605924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f700, cid 4, qid 0 00:29:28.941 [2024-11-20 08:26:33.606121] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:28.941 [2024-11-20 08:26:33.606127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:28.941 [2024-11-20 08:26:33.606130] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14dd550): datao=0, datal=3072, cccid=4 00:29:28.941 [2024-11-20 08:26:33.606139] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153f700) on tqpair(0x14dd550): expected_datao=0, payload_size=3072 00:29:28.941 [2024-11-20 08:26:33.606143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606150] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606154] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.941 [2024-11-20 08:26:33.606314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.941 [2024-11-20 08:26:33.606318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f700) on tqpair=0x14dd550 00:29:28.941 [2024-11-20 08:26:33.606330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14dd550) 00:29:28.941 [2024-11-20 08:26:33.606340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.941 [2024-11-20 08:26:33.606354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f700, cid 4, qid 0 00:29:28.941 [2024-11-20 
08:26:33.606589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:28.941 [2024-11-20 08:26:33.606596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:28.941 [2024-11-20 08:26:33.606599] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606603] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14dd550): datao=0, datal=8, cccid=4 00:29:28.941 [2024-11-20 08:26:33.606607] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153f700) on tqpair(0x14dd550): expected_datao=0, payload_size=8 00:29:28.941 [2024-11-20 08:26:33.606612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606618] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.606624] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.647069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.941 [2024-11-20 08:26:33.647081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.941 [2024-11-20 08:26:33.647084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.941 [2024-11-20 08:26:33.647089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f700) on tqpair=0x14dd550 00:29:28.941 ===================================================== 00:29:28.941 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:28.941 ===================================================== 00:29:28.941 Controller Capabilities/Features 00:29:28.941 ================================ 00:29:28.941 Vendor ID: 0000 00:29:28.941 Subsystem Vendor ID: 0000 00:29:28.941 Serial Number: .................... 00:29:28.941 Model Number: ........................................ 
00:29:28.941 Firmware Version: 25.01 00:29:28.941 Recommended Arb Burst: 0 00:29:28.941 IEEE OUI Identifier: 00 00 00 00:29:28.941 Multi-path I/O 00:29:28.941 May have multiple subsystem ports: No 00:29:28.941 May have multiple controllers: No 00:29:28.941 Associated with SR-IOV VF: No 00:29:28.941 Max Data Transfer Size: 131072 00:29:28.941 Max Number of Namespaces: 0 00:29:28.941 Max Number of I/O Queues: 1024 00:29:28.941 NVMe Specification Version (VS): 1.3 00:29:28.941 NVMe Specification Version (Identify): 1.3 00:29:28.941 Maximum Queue Entries: 128 00:29:28.941 Contiguous Queues Required: Yes 00:29:28.941 Arbitration Mechanisms Supported 00:29:28.941 Weighted Round Robin: Not Supported 00:29:28.941 Vendor Specific: Not Supported 00:29:28.941 Reset Timeout: 15000 ms 00:29:28.941 Doorbell Stride: 4 bytes 00:29:28.941 NVM Subsystem Reset: Not Supported 00:29:28.941 Command Sets Supported 00:29:28.941 NVM Command Set: Supported 00:29:28.941 Boot Partition: Not Supported 00:29:28.941 Memory Page Size Minimum: 4096 bytes 00:29:28.941 Memory Page Size Maximum: 4096 bytes 00:29:28.941 Persistent Memory Region: Not Supported 00:29:28.941 Optional Asynchronous Events Supported 00:29:28.941 Namespace Attribute Notices: Not Supported 00:29:28.941 Firmware Activation Notices: Not Supported 00:29:28.941 ANA Change Notices: Not Supported 00:29:28.941 PLE Aggregate Log Change Notices: Not Supported 00:29:28.941 LBA Status Info Alert Notices: Not Supported 00:29:28.941 EGE Aggregate Log Change Notices: Not Supported 00:29:28.941 Normal NVM Subsystem Shutdown event: Not Supported 00:29:28.941 Zone Descriptor Change Notices: Not Supported 00:29:28.941 Discovery Log Change Notices: Supported 00:29:28.941 Controller Attributes 00:29:28.941 128-bit Host Identifier: Not Supported 00:29:28.941 Non-Operational Permissive Mode: Not Supported 00:29:28.941 NVM Sets: Not Supported 00:29:28.941 Read Recovery Levels: Not Supported 00:29:28.941 Endurance Groups: Not Supported 00:29:28.941 
Predictable Latency Mode: Not Supported 00:29:28.941 Traffic Based Keep ALive: Not Supported 00:29:28.941 Namespace Granularity: Not Supported 00:29:28.941 SQ Associations: Not Supported 00:29:28.941 UUID List: Not Supported 00:29:28.941 Multi-Domain Subsystem: Not Supported 00:29:28.941 Fixed Capacity Management: Not Supported 00:29:28.941 Variable Capacity Management: Not Supported 00:29:28.941 Delete Endurance Group: Not Supported 00:29:28.941 Delete NVM Set: Not Supported 00:29:28.941 Extended LBA Formats Supported: Not Supported 00:29:28.941 Flexible Data Placement Supported: Not Supported 00:29:28.941 00:29:28.941 Controller Memory Buffer Support 00:29:28.941 ================================ 00:29:28.941 Supported: No 00:29:28.941 00:29:28.941 Persistent Memory Region Support 00:29:28.941 ================================ 00:29:28.941 Supported: No 00:29:28.941 00:29:28.941 Admin Command Set Attributes 00:29:28.941 ============================ 00:29:28.941 Security Send/Receive: Not Supported 00:29:28.941 Format NVM: Not Supported 00:29:28.941 Firmware Activate/Download: Not Supported 00:29:28.941 Namespace Management: Not Supported 00:29:28.941 Device Self-Test: Not Supported 00:29:28.941 Directives: Not Supported 00:29:28.941 NVMe-MI: Not Supported 00:29:28.941 Virtualization Management: Not Supported 00:29:28.941 Doorbell Buffer Config: Not Supported 00:29:28.941 Get LBA Status Capability: Not Supported 00:29:28.941 Command & Feature Lockdown Capability: Not Supported 00:29:28.941 Abort Command Limit: 1 00:29:28.941 Async Event Request Limit: 4 00:29:28.941 Number of Firmware Slots: N/A 00:29:28.941 Firmware Slot 1 Read-Only: N/A 00:29:28.941 Firmware Activation Without Reset: N/A 00:29:28.941 Multiple Update Detection Support: N/A 00:29:28.942 Firmware Update Granularity: No Information Provided 00:29:28.942 Per-Namespace SMART Log: No 00:29:28.942 Asymmetric Namespace Access Log Page: Not Supported 00:29:28.942 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:28.942 Command Effects Log Page: Not Supported 00:29:28.942 Get Log Page Extended Data: Supported 00:29:28.942 Telemetry Log Pages: Not Supported 00:29:28.942 Persistent Event Log Pages: Not Supported 00:29:28.942 Supported Log Pages Log Page: May Support 00:29:28.942 Commands Supported & Effects Log Page: Not Supported 00:29:28.942 Feature Identifiers & Effects Log Page:May Support 00:29:28.942 NVMe-MI Commands & Effects Log Page: May Support 00:29:28.942 Data Area 4 for Telemetry Log: Not Supported 00:29:28.942 Error Log Page Entries Supported: 128 00:29:28.942 Keep Alive: Not Supported 00:29:28.942 00:29:28.942 NVM Command Set Attributes 00:29:28.942 ========================== 00:29:28.942 Submission Queue Entry Size 00:29:28.942 Max: 1 00:29:28.942 Min: 1 00:29:28.942 Completion Queue Entry Size 00:29:28.942 Max: 1 00:29:28.942 Min: 1 00:29:28.942 Number of Namespaces: 0 00:29:28.942 Compare Command: Not Supported 00:29:28.942 Write Uncorrectable Command: Not Supported 00:29:28.942 Dataset Management Command: Not Supported 00:29:28.942 Write Zeroes Command: Not Supported 00:29:28.942 Set Features Save Field: Not Supported 00:29:28.942 Reservations: Not Supported 00:29:28.942 Timestamp: Not Supported 00:29:28.942 Copy: Not Supported 00:29:28.942 Volatile Write Cache: Not Present 00:29:28.942 Atomic Write Unit (Normal): 1 00:29:28.942 Atomic Write Unit (PFail): 1 00:29:28.942 Atomic Compare & Write Unit: 1 00:29:28.942 Fused Compare & Write: Supported 00:29:28.942 Scatter-Gather List 00:29:28.942 SGL Command Set: Supported 00:29:28.942 SGL Keyed: Supported 00:29:28.942 SGL Bit Bucket Descriptor: Not Supported 00:29:28.942 SGL Metadata Pointer: Not Supported 00:29:28.942 Oversized SGL: Not Supported 00:29:28.942 SGL Metadata Address: Not Supported 00:29:28.942 SGL Offset: Supported 00:29:28.942 Transport SGL Data Block: Not Supported 00:29:28.942 Replay Protected Memory Block: Not Supported 00:29:28.942 00:29:28.942 
Firmware Slot Information 00:29:28.942 ========================= 00:29:28.942 Active slot: 0 00:29:28.942 00:29:28.942 00:29:28.942 Error Log 00:29:28.942 ========= 00:29:28.942 00:29:28.942 Active Namespaces 00:29:28.942 ================= 00:29:28.942 Discovery Log Page 00:29:28.942 ================== 00:29:28.942 Generation Counter: 2 00:29:28.942 Number of Records: 2 00:29:28.942 Record Format: 0 00:29:28.942 00:29:28.942 Discovery Log Entry 0 00:29:28.942 ---------------------- 00:29:28.942 Transport Type: 3 (TCP) 00:29:28.942 Address Family: 1 (IPv4) 00:29:28.942 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:28.942 Entry Flags: 00:29:28.942 Duplicate Returned Information: 1 00:29:28.942 Explicit Persistent Connection Support for Discovery: 1 00:29:28.942 Transport Requirements: 00:29:28.942 Secure Channel: Not Required 00:29:28.942 Port ID: 0 (0x0000) 00:29:28.942 Controller ID: 65535 (0xffff) 00:29:28.942 Admin Max SQ Size: 128 00:29:28.942 Transport Service Identifier: 4420 00:29:28.942 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:28.942 Transport Address: 10.0.0.2 00:29:28.942 Discovery Log Entry 1 00:29:28.942 ---------------------- 00:29:28.942 Transport Type: 3 (TCP) 00:29:28.942 Address Family: 1 (IPv4) 00:29:28.942 Subsystem Type: 2 (NVM Subsystem) 00:29:28.942 Entry Flags: 00:29:28.942 Duplicate Returned Information: 0 00:29:28.942 Explicit Persistent Connection Support for Discovery: 0 00:29:28.942 Transport Requirements: 00:29:28.942 Secure Channel: Not Required 00:29:28.942 Port ID: 0 (0x0000) 00:29:28.942 Controller ID: 65535 (0xffff) 00:29:28.942 Admin Max SQ Size: 128 00:29:28.942 Transport Service Identifier: 4420 00:29:28.942 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:28.942 Transport Address: 10.0.0.2 [2024-11-20 08:26:33.647174] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:28.942 [2024-11-20 
08:26:33.647186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f100) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.942 [2024-11-20 08:26:33.647198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f280) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.942 [2024-11-20 08:26:33.647208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f400) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.942 [2024-11-20 08:26:33.647218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.942 [2024-11-20 08:26:33.647234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.942 [2024-11-20 08:26:33.647249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.942 [2024-11-20 08:26:33.647263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.942 [2024-11-20 08:26:33.647346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.942 [2024-11-20 
08:26:33.647353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.942 [2024-11-20 08:26:33.647357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.942 [2024-11-20 08:26:33.647382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.942 [2024-11-20 08:26:33.647396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.942 [2024-11-20 08:26:33.647576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.942 [2024-11-20 08:26:33.647582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.942 [2024-11-20 08:26:33.647585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647594] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:28.942 [2024-11-20 08:26:33.647599] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:28.942 [2024-11-20 08:26:33.647612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.942 
[2024-11-20 08:26:33.647620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.942 [2024-11-20 08:26:33.647627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.942 [2024-11-20 08:26:33.647638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.942 [2024-11-20 08:26:33.647842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.942 [2024-11-20 08:26:33.647848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.942 [2024-11-20 08:26:33.647851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.647871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.647879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.942 [2024-11-20 08:26:33.647886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.942 [2024-11-20 08:26:33.647896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.942 [2024-11-20 08:26:33.648110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.942 [2024-11-20 08:26:33.648116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.942 [2024-11-20 08:26:33.648120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.648124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on 
tqpair=0x14dd550 00:29:28.942 [2024-11-20 08:26:33.648134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.648138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.942 [2024-11-20 08:26:33.648141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.942 [2024-11-20 08:26:33.648148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.942 [2024-11-20 08:26:33.648159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.648336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.648343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.648346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.648360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.648374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.648384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.648603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.648609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:28.943 [2024-11-20 08:26:33.648613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.648626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.648643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.648653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.648820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.648827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.648830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.648844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.648852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.648859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.648873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.649074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.649080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.649084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.649097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.649112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.649121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.649300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.649306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.649310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.649323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.649338] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.649348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.649570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.649576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.649579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.649593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.649609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.649619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.649817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.649824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.649827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.649841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649845] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.649849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14dd550) 00:29:28.943 [2024-11-20 08:26:33.649855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.943 [2024-11-20 08:26:33.653871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f580, cid 3, qid 0 00:29:28.943 [2024-11-20 08:26:33.654029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:28.943 [2024-11-20 08:26:33.654035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:28.943 [2024-11-20 08:26:33.654039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:28.943 [2024-11-20 08:26:33.654043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f580) on tqpair=0x14dd550 00:29:28.943 [2024-11-20 08:26:33.654050] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:29:29.210 00:29:29.210 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:29.210 [2024-11-20 08:26:33.692946] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:29:29.210 [2024-11-20 08:26:33.692988] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102195 ] 00:29:29.210 [2024-11-20 08:26:33.746940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:29.210 [2024-11-20 08:26:33.746989] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:29.210 [2024-11-20 08:26:33.746995] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:29.210 [2024-11-20 08:26:33.747006] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:29.210 [2024-11-20 08:26:33.747015] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:29.210 [2024-11-20 08:26:33.751083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:29.210 [2024-11-20 08:26:33.751112] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x229b550 0 00:29:29.210 [2024-11-20 08:26:33.758872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:29.210 [2024-11-20 08:26:33.758883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:29.210 [2024-11-20 08:26:33.758888] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:29.210 [2024-11-20 08:26:33.758891] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:29.210 [2024-11-20 08:26:33.758922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.758928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.758932] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.210 [2024-11-20 08:26:33.758944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:29.210 [2024-11-20 08:26:33.758962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.210 [2024-11-20 08:26:33.765871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.210 [2024-11-20 08:26:33.765880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.210 [2024-11-20 08:26:33.765884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.765889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.210 [2024-11-20 08:26:33.765898] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:29.210 [2024-11-20 08:26:33.765904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:29.210 [2024-11-20 08:26:33.765910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:29.210 [2024-11-20 08:26:33.765922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.765926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.765930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.210 [2024-11-20 08:26:33.765937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.210 [2024-11-20 08:26:33.765951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.210 [2024-11-20 08:26:33.766134] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.210 [2024-11-20 08:26:33.766141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.210 [2024-11-20 08:26:33.766144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.766148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.210 [2024-11-20 08:26:33.766153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:29.210 [2024-11-20 08:26:33.766161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:29.210 [2024-11-20 08:26:33.766168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.766171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.210 [2024-11-20 08:26:33.766175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.210 [2024-11-20 08:26:33.766182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.210 [2024-11-20 08:26:33.766192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.210 [2024-11-20 08:26:33.766382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.766388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.766392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.766401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:29.211 [2024-11-20 08:26:33.766409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.766418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.766432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.211 [2024-11-20 08:26:33.766443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.766650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.766657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.766660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.766669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.766678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.766693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.211 [2024-11-20 08:26:33.766703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.766898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.766905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.766909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.766913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.766917] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:29.211 [2024-11-20 08:26:33.766922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.766930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.767038] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:29.211 [2024-11-20 08:26:33.767043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.767051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.767065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.211 [2024-11-20 08:26:33.767076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.767242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.767248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.767252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.767260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:29.211 [2024-11-20 08:26:33.767271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.767286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.211 [2024-11-20 08:26:33.767296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.767517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.767523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.767526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.767535] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:29.211 [2024-11-20 08:26:33.767539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:29.211 [2024-11-20 08:26:33.767547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:29.211 [2024-11-20 08:26:33.767557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:29.211 [2024-11-20 08:26:33.767565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.767576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.211 [2024-11-20 08:26:33.767587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.767804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.211 [2024-11-20 08:26:33.767811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.211 [2024-11-20 08:26:33.767815] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=4096, cccid=0 00:29:29.211 [2024-11-20 08:26:33.767824] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd100) on tqpair(0x229b550): expected_datao=0, payload_size=4096 00:29:29.211 [2024-11-20 08:26:33.767828] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767836] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.767839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.768018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.768022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.768033] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:29.211 [2024-11-20 08:26:33.768037] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:29.211 [2024-11-20 08:26:33.768042] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:29.211 [2024-11-20 08:26:33.768049] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:29.211 [2024-11-20 08:26:33.768058] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:29.211 [2024-11-20 08:26:33.768063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:29.211 [2024-11-20 08:26:33.768073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:29.211 [2024-11-20 08:26:33.768079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768083] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.768094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:29.211 [2024-11-20 08:26:33.768105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd100, cid 0, qid 0 00:29:29.211 [2024-11-20 08:26:33.768280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.211 [2024-11-20 08:26:33.768287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.211 [2024-11-20 08:26:33.768290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.211 [2024-11-20 08:26:33.768301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.768314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.211 [2024-11-20 08:26:33.768321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.211 [2024-11-20 08:26:33.768328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x229b550) 00:29:29.211 [2024-11-20 08:26:33.768334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:29.212 [2024-11-20 08:26:33.768340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.768353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.212 [2024-11-20 08:26:33.768359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.768372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.212 [2024-11-20 08:26:33.768377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.768402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.212 [2024-11-20 08:26:33.768415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x22fd100, cid 0, qid 0 00:29:29.212 [2024-11-20 08:26:33.768421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd280, cid 1, qid 0 00:29:29.212 [2024-11-20 08:26:33.768426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd400, cid 2, qid 0 00:29:29.212 [2024-11-20 08:26:33.768431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:29:29.212 [2024-11-20 08:26:33.768435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.212 [2024-11-20 08:26:33.768628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.212 [2024-11-20 08:26:33.768635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.212 [2024-11-20 08:26:33.768638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.212 [2024-11-20 08:26:33.768650] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:29.212 [2024-11-20 08:26:33.768655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 
08:26:33.768683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.768689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:29.212 [2024-11-20 08:26:33.768700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.212 [2024-11-20 08:26:33.768872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.212 [2024-11-20 08:26:33.768879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.212 [2024-11-20 08:26:33.768882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.212 [2024-11-20 08:26:33.768951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.768967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.768971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.768977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.212 [2024-11-20 08:26:33.768988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.212 [2024-11-20 08:26:33.769163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.212 [2024-11-20 08:26:33.769170] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.212 [2024-11-20 08:26:33.769173] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=4096, cccid=4 00:29:29.212 [2024-11-20 08:26:33.769182] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b550): expected_datao=0, payload_size=4096 00:29:29.212 [2024-11-20 08:26:33.769188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769218] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769222] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.212 [2024-11-20 08:26:33.769411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.212 [2024-11-20 08:26:33.769415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.212 [2024-11-20 08:26:33.769428] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:29.212 [2024-11-20 08:26:33.769441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.769450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.769457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.769467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.212 [2024-11-20 08:26:33.769478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.212 [2024-11-20 08:26:33.769702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.212 [2024-11-20 08:26:33.769708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.212 [2024-11-20 08:26:33.769712] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=4096, cccid=4 00:29:29.212 [2024-11-20 08:26:33.769720] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b550): expected_datao=0, payload_size=4096 00:29:29.212 [2024-11-20 08:26:33.769724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769767] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.769771] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.773869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.212 [2024-11-20 08:26:33.773877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.212 [2024-11-20 08:26:33.773880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.773884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.212 [2024-11-20 08:26:33.773897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:29.212 
[2024-11-20 08:26:33.773906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:29.212 [2024-11-20 08:26:33.773913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.773917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.212 [2024-11-20 08:26:33.773924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.212 [2024-11-20 08:26:33.773936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.212 [2024-11-20 08:26:33.774106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.212 [2024-11-20 08:26:33.774112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.212 [2024-11-20 08:26:33.774118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.774122] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=4096, cccid=4 00:29:29.212 [2024-11-20 08:26:33.774126] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b550): expected_datao=0, payload_size=4096 00:29:29.212 [2024-11-20 08:26:33.774131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.774147] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.774151] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.774337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.212 [2024-11-20 08:26:33.774343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.212 [2024-11-20 08:26:33.774347] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.212 [2024-11-20 08:26:33.774350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.212 [2024-11-20 08:26:33.774358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774396] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:29.213 [2024-11-20 08:26:33.774401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:29.213 [2024-11-20 08:26:33.774406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:29.213 [2024-11-20 08:26:33.774419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774423] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.774430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.774437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.774450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.213 [2024-11-20 08:26:33.774463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.213 [2024-11-20 08:26:33.774469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:29:29.213 [2024-11-20 08:26:33.774669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.213 [2024-11-20 08:26:33.774675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.213 [2024-11-20 08:26:33.774679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.213 [2024-11-20 08:26:33.774691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.213 [2024-11-20 08:26:33.774698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.213 [2024-11-20 08:26:33.774701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd880) on tqpair=0x229b550 00:29:29.213 [2024-11-20 
08:26:33.774714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.774724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.774734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:29:29.213 [2024-11-20 08:26:33.774906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.213 [2024-11-20 08:26:33.774913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.213 [2024-11-20 08:26:33.774916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd880) on tqpair=0x229b550 00:29:29.213 [2024-11-20 08:26:33.774929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.774933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.774939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.774950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:29:29.213 [2024-11-20 08:26:33.775141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.213 [2024-11-20 08:26:33.775148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.213 [2024-11-20 08:26:33.775151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x22fd880) on tqpair=0x229b550 00:29:29.213 [2024-11-20 08:26:33.775164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.775174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.775184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:29:29.213 [2024-11-20 08:26:33.775401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.213 [2024-11-20 08:26:33.775407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.213 [2024-11-20 08:26:33.775411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd880) on tqpair=0x229b550 00:29:29.213 [2024-11-20 08:26:33.775428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.775439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.775446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.775456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:29.213 [2024-11-20 08:26:33.775467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.775477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.775485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x229b550) 00:29:29.213 [2024-11-20 08:26:33.775494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.213 [2024-11-20 08:26:33.775506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd880, cid 5, qid 0 00:29:29.213 [2024-11-20 08:26:33.775511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd700, cid 4, qid 0 00:29:29.213 [2024-11-20 08:26:33.775516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fda00, cid 6, qid 0 00:29:29.213 [2024-11-20 08:26:33.775521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fdb80, cid 7, qid 0 00:29:29.213 [2024-11-20 08:26:33.775802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.213 [2024-11-20 08:26:33.775808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.213 [2024-11-20 08:26:33.775812] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775815] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=8192, cccid=5 00:29:29.213 [2024-11-20 08:26:33.775820] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd880) on tqpair(0x229b550): expected_datao=0, payload_size=8192 00:29:29.213 [2024-11-20 08:26:33.775824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775857] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775866] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.213 [2024-11-20 08:26:33.775878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.213 [2024-11-20 08:26:33.775882] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775886] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=512, cccid=4 00:29:29.213 [2024-11-20 08:26:33.775890] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fd700) on tqpair(0x229b550): expected_datao=0, payload_size=512 00:29:29.213 [2024-11-20 08:26:33.775894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775901] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775905] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.213 [2024-11-20 08:26:33.775916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.213 [2024-11-20 08:26:33.775920] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=512, cccid=6 00:29:29.213 [2024-11-20 08:26:33.775928] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x22fda00) on tqpair(0x229b550): expected_datao=0, payload_size=512 00:29:29.213 [2024-11-20 08:26:33.775932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775938] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775942] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.213 [2024-11-20 08:26:33.775948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.213 [2024-11-20 08:26:33.775953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.213 [2024-11-20 08:26:33.775959] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.775962] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229b550): datao=0, datal=4096, cccid=7 00:29:29.214 [2024-11-20 08:26:33.775967] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fdb80) on tqpair(0x229b550): expected_datao=0, payload_size=4096 00:29:29.214 [2024-11-20 08:26:33.775971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.775983] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.775986] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.776001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.214 [2024-11-20 08:26:33.776007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.214 [2024-11-20 08:26:33.776010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.776014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd880) on tqpair=0x229b550 00:29:29.214 [2024-11-20 08:26:33.776026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.214 [2024-11-20 08:26:33.776032] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.214 [2024-11-20 08:26:33.776036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.776040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd700) on tqpair=0x229b550 00:29:29.214 [2024-11-20 08:26:33.776050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.214 [2024-11-20 08:26:33.776056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.214 [2024-11-20 08:26:33.776059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.776063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fda00) on tqpair=0x229b550 00:29:29.214 [2024-11-20 08:26:33.776071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.214 [2024-11-20 08:26:33.776076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.214 [2024-11-20 08:26:33.776080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.214 [2024-11-20 08:26:33.776084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fdb80) on tqpair=0x229b550 00:29:29.214 ===================================================== 00:29:29.214 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.214 ===================================================== 00:29:29.214 Controller Capabilities/Features 00:29:29.214 ================================ 00:29:29.214 Vendor ID: 8086 00:29:29.214 Subsystem Vendor ID: 8086 00:29:29.214 Serial Number: SPDK00000000000001 00:29:29.214 Model Number: SPDK bdev Controller 00:29:29.214 Firmware Version: 25.01 00:29:29.214 Recommended Arb Burst: 6 00:29:29.214 IEEE OUI Identifier: e4 d2 5c 00:29:29.214 Multi-path I/O 00:29:29.214 May have multiple subsystem ports: Yes 00:29:29.214 May have multiple controllers: Yes 00:29:29.214 Associated with SR-IOV VF: No 
00:29:29.214 Max Data Transfer Size: 131072 00:29:29.214 Max Number of Namespaces: 32 00:29:29.214 Max Number of I/O Queues: 127 00:29:29.214 NVMe Specification Version (VS): 1.3 00:29:29.214 NVMe Specification Version (Identify): 1.3 00:29:29.214 Maximum Queue Entries: 128 00:29:29.214 Contiguous Queues Required: Yes 00:29:29.214 Arbitration Mechanisms Supported 00:29:29.214 Weighted Round Robin: Not Supported 00:29:29.214 Vendor Specific: Not Supported 00:29:29.214 Reset Timeout: 15000 ms 00:29:29.214 Doorbell Stride: 4 bytes 00:29:29.214 NVM Subsystem Reset: Not Supported 00:29:29.214 Command Sets Supported 00:29:29.214 NVM Command Set: Supported 00:29:29.214 Boot Partition: Not Supported 00:29:29.214 Memory Page Size Minimum: 4096 bytes 00:29:29.214 Memory Page Size Maximum: 4096 bytes 00:29:29.214 Persistent Memory Region: Not Supported 00:29:29.214 Optional Asynchronous Events Supported 00:29:29.214 Namespace Attribute Notices: Supported 00:29:29.214 Firmware Activation Notices: Not Supported 00:29:29.214 ANA Change Notices: Not Supported 00:29:29.214 PLE Aggregate Log Change Notices: Not Supported 00:29:29.214 LBA Status Info Alert Notices: Not Supported 00:29:29.214 EGE Aggregate Log Change Notices: Not Supported 00:29:29.214 Normal NVM Subsystem Shutdown event: Not Supported 00:29:29.214 Zone Descriptor Change Notices: Not Supported 00:29:29.214 Discovery Log Change Notices: Not Supported 00:29:29.214 Controller Attributes 00:29:29.214 128-bit Host Identifier: Supported 00:29:29.214 Non-Operational Permissive Mode: Not Supported 00:29:29.214 NVM Sets: Not Supported 00:29:29.214 Read Recovery Levels: Not Supported 00:29:29.214 Endurance Groups: Not Supported 00:29:29.214 Predictable Latency Mode: Not Supported 00:29:29.214 Traffic Based Keep ALive: Not Supported 00:29:29.214 Namespace Granularity: Not Supported 00:29:29.214 SQ Associations: Not Supported 00:29:29.214 UUID List: Not Supported 00:29:29.214 Multi-Domain Subsystem: Not Supported 00:29:29.214 
Fixed Capacity Management: Not Supported 00:29:29.214 Variable Capacity Management: Not Supported 00:29:29.214 Delete Endurance Group: Not Supported 00:29:29.214 Delete NVM Set: Not Supported 00:29:29.214 Extended LBA Formats Supported: Not Supported 00:29:29.214 Flexible Data Placement Supported: Not Supported 00:29:29.214 00:29:29.214 Controller Memory Buffer Support 00:29:29.214 ================================ 00:29:29.214 Supported: No 00:29:29.214 00:29:29.214 Persistent Memory Region Support 00:29:29.214 ================================ 00:29:29.214 Supported: No 00:29:29.214 00:29:29.214 Admin Command Set Attributes 00:29:29.214 ============================ 00:29:29.214 Security Send/Receive: Not Supported 00:29:29.214 Format NVM: Not Supported 00:29:29.214 Firmware Activate/Download: Not Supported 00:29:29.214 Namespace Management: Not Supported 00:29:29.214 Device Self-Test: Not Supported 00:29:29.214 Directives: Not Supported 00:29:29.214 NVMe-MI: Not Supported 00:29:29.214 Virtualization Management: Not Supported 00:29:29.214 Doorbell Buffer Config: Not Supported 00:29:29.214 Get LBA Status Capability: Not Supported 00:29:29.214 Command & Feature Lockdown Capability: Not Supported 00:29:29.214 Abort Command Limit: 4 00:29:29.214 Async Event Request Limit: 4 00:29:29.214 Number of Firmware Slots: N/A 00:29:29.214 Firmware Slot 1 Read-Only: N/A 00:29:29.214 Firmware Activation Without Reset: N/A 00:29:29.214 Multiple Update Detection Support: N/A 00:29:29.214 Firmware Update Granularity: No Information Provided 00:29:29.214 Per-Namespace SMART Log: No 00:29:29.214 Asymmetric Namespace Access Log Page: Not Supported 00:29:29.214 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:29.214 Command Effects Log Page: Supported 00:29:29.214 Get Log Page Extended Data: Supported 00:29:29.214 Telemetry Log Pages: Not Supported 00:29:29.214 Persistent Event Log Pages: Not Supported 00:29:29.214 Supported Log Pages Log Page: May Support 00:29:29.214 Commands Supported & 
Effects Log Page: Not Supported 00:29:29.214 Feature Identifiers & Effects Log Page:May Support 00:29:29.214 NVMe-MI Commands & Effects Log Page: May Support 00:29:29.214 Data Area 4 for Telemetry Log: Not Supported 00:29:29.214 Error Log Page Entries Supported: 128 00:29:29.214 Keep Alive: Supported 00:29:29.214 Keep Alive Granularity: 10000 ms 00:29:29.214 00:29:29.214 NVM Command Set Attributes 00:29:29.214 ========================== 00:29:29.214 Submission Queue Entry Size 00:29:29.214 Max: 64 00:29:29.214 Min: 64 00:29:29.214 Completion Queue Entry Size 00:29:29.214 Max: 16 00:29:29.214 Min: 16 00:29:29.214 Number of Namespaces: 32 00:29:29.214 Compare Command: Supported 00:29:29.214 Write Uncorrectable Command: Not Supported 00:29:29.214 Dataset Management Command: Supported 00:29:29.214 Write Zeroes Command: Supported 00:29:29.214 Set Features Save Field: Not Supported 00:29:29.214 Reservations: Supported 00:29:29.214 Timestamp: Not Supported 00:29:29.214 Copy: Supported 00:29:29.214 Volatile Write Cache: Present 00:29:29.214 Atomic Write Unit (Normal): 1 00:29:29.214 Atomic Write Unit (PFail): 1 00:29:29.214 Atomic Compare & Write Unit: 1 00:29:29.214 Fused Compare & Write: Supported 00:29:29.215 Scatter-Gather List 00:29:29.215 SGL Command Set: Supported 00:29:29.215 SGL Keyed: Supported 00:29:29.215 SGL Bit Bucket Descriptor: Not Supported 00:29:29.215 SGL Metadata Pointer: Not Supported 00:29:29.215 Oversized SGL: Not Supported 00:29:29.215 SGL Metadata Address: Not Supported 00:29:29.215 SGL Offset: Supported 00:29:29.215 Transport SGL Data Block: Not Supported 00:29:29.215 Replay Protected Memory Block: Not Supported 00:29:29.215 00:29:29.215 Firmware Slot Information 00:29:29.215 ========================= 00:29:29.215 Active slot: 1 00:29:29.215 Slot 1 Firmware Revision: 25.01 00:29:29.215 00:29:29.215 00:29:29.215 Commands Supported and Effects 00:29:29.215 ============================== 00:29:29.215 Admin Commands 00:29:29.215 -------------- 
00:29:29.215 Get Log Page (02h): Supported 00:29:29.215 Identify (06h): Supported 00:29:29.215 Abort (08h): Supported 00:29:29.215 Set Features (09h): Supported 00:29:29.215 Get Features (0Ah): Supported 00:29:29.215 Asynchronous Event Request (0Ch): Supported 00:29:29.215 Keep Alive (18h): Supported 00:29:29.215 I/O Commands 00:29:29.215 ------------ 00:29:29.215 Flush (00h): Supported LBA-Change 00:29:29.215 Write (01h): Supported LBA-Change 00:29:29.215 Read (02h): Supported 00:29:29.215 Compare (05h): Supported 00:29:29.215 Write Zeroes (08h): Supported LBA-Change 00:29:29.215 Dataset Management (09h): Supported LBA-Change 00:29:29.215 Copy (19h): Supported LBA-Change 00:29:29.215 00:29:29.215 Error Log 00:29:29.215 ========= 00:29:29.215 00:29:29.215 Arbitration 00:29:29.215 =========== 00:29:29.215 Arbitration Burst: 1 00:29:29.215 00:29:29.215 Power Management 00:29:29.215 ================ 00:29:29.215 Number of Power States: 1 00:29:29.215 Current Power State: Power State #0 00:29:29.215 Power State #0: 00:29:29.215 Max Power: 0.00 W 00:29:29.215 Non-Operational State: Operational 00:29:29.215 Entry Latency: Not Reported 00:29:29.215 Exit Latency: Not Reported 00:29:29.215 Relative Read Throughput: 0 00:29:29.215 Relative Read Latency: 0 00:29:29.215 Relative Write Throughput: 0 00:29:29.215 Relative Write Latency: 0 00:29:29.215 Idle Power: Not Reported 00:29:29.215 Active Power: Not Reported 00:29:29.215 Non-Operational Permissive Mode: Not Supported 00:29:29.215 00:29:29.215 Health Information 00:29:29.215 ================== 00:29:29.215 Critical Warnings: 00:29:29.215 Available Spare Space: OK 00:29:29.215 Temperature: OK 00:29:29.215 Device Reliability: OK 00:29:29.215 Read Only: No 00:29:29.215 Volatile Memory Backup: OK 00:29:29.215 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:29.215 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:29.215 Available Spare: 0% 00:29:29.215 Available Spare Threshold: 0% 00:29:29.215 Life Percentage 
Used:[2024-11-20 08:26:33.776181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x229b550) 00:29:29.215 [2024-11-20 08:26:33.776193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.215 [2024-11-20 08:26:33.776205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fdb80, cid 7, qid 0 00:29:29.215 [2024-11-20 08:26:33.776390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.215 [2024-11-20 08:26:33.776397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.215 [2024-11-20 08:26:33.776401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fdb80) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776433] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:29.215 [2024-11-20 08:26:33.776442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd100) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-11-20 08:26:33.776454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd280) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-11-20 08:26:33.776464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd400) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-11-20 08:26:33.776477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-11-20 08:26:33.776490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b550) 00:29:29.215 [2024-11-20 08:26:33.776504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.215 [2024-11-20 08:26:33.776516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:29:29.215 [2024-11-20 08:26:33.776711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.215 [2024-11-20 08:26:33.776718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.215 [2024-11-20 08:26:33.776721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b550) 00:29:29.215 [2024-11-20 08:26:33.776746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.215 [2024-11-20 08:26:33.776759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:29:29.215 [2024-11-20 08:26:33.776978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.215 [2024-11-20 08:26:33.776985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.215 [2024-11-20 08:26:33.776989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.215 [2024-11-20 08:26:33.776993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b550 00:29:29.215 [2024-11-20 08:26:33.776998] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:29.215 [2024-11-20 08:26:33.777002] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:29.216 [2024-11-20 08:26:33.777012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.777016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.777019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b550) 00:29:29.216 [2024-11-20 08:26:33.777026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.216 [2024-11-20 08:26:33.777036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:29:29.216 [2024-11-20 08:26:33.780869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.216 [2024-11-20 08:26:33.780878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.216 [2024-11-20 08:26:33.780881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.780885] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b550 00:29:29.216 [2024-11-20 08:26:33.780896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.780900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.780903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229b550) 00:29:29.216 [2024-11-20 08:26:33.780910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.216 [2024-11-20 08:26:33.780925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fd580, cid 3, qid 0 00:29:29.216 [2024-11-20 08:26:33.781090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.216 [2024-11-20 08:26:33.781097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.216 [2024-11-20 08:26:33.781100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.216 [2024-11-20 08:26:33.781104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fd580) on tqpair=0x229b550 00:29:29.216 [2024-11-20 08:26:33.781112] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:29.216 0% 00:29:29.216 Data Units Read: 0 00:29:29.216 Data Units Written: 0 00:29:29.216 Host Read Commands: 0 00:29:29.216 Host Write Commands: 0 00:29:29.216 Controller Busy Time: 0 minutes 00:29:29.216 Power Cycles: 0 00:29:29.216 Power On Hours: 0 hours 00:29:29.216 Unsafe Shutdowns: 0 00:29:29.216 Unrecoverable Media Errors: 0 00:29:29.216 Lifetime Error Log Entries: 0 00:29:29.216 Warning Temperature Time: 0 minutes 00:29:29.216 Critical Temperature Time: 0 minutes 00:29:29.216 00:29:29.216 Number of Queues 00:29:29.216 ================ 00:29:29.216 Number of I/O Submission Queues: 127 00:29:29.216 
Number of I/O Completion Queues: 127 00:29:29.216 00:29:29.216 Active Namespaces 00:29:29.216 ================= 00:29:29.216 Namespace ID:1 00:29:29.216 Error Recovery Timeout: Unlimited 00:29:29.216 Command Set Identifier: NVM (00h) 00:29:29.216 Deallocate: Supported 00:29:29.216 Deallocated/Unwritten Error: Not Supported 00:29:29.216 Deallocated Read Value: Unknown 00:29:29.216 Deallocate in Write Zeroes: Not Supported 00:29:29.216 Deallocated Guard Field: 0xFFFF 00:29:29.216 Flush: Supported 00:29:29.216 Reservation: Supported 00:29:29.216 Namespace Sharing Capabilities: Multiple Controllers 00:29:29.216 Size (in LBAs): 131072 (0GiB) 00:29:29.216 Capacity (in LBAs): 131072 (0GiB) 00:29:29.216 Utilization (in LBAs): 131072 (0GiB) 00:29:29.216 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:29.216 EUI64: ABCDEF0123456789 00:29:29.216 UUID: fbc444a6-6168-4fb8-9417-5a902919994f 00:29:29.216 Thin Provisioning: Not Supported 00:29:29.216 Per-NS Atomic Units: Yes 00:29:29.216 Atomic Boundary Size (Normal): 0 00:29:29.216 Atomic Boundary Size (PFail): 0 00:29:29.216 Atomic Boundary Offset: 0 00:29:29.216 Maximum Single Source Range Length: 65535 00:29:29.216 Maximum Copy Length: 65535 00:29:29.216 Maximum Source Range Count: 1 00:29:29.216 NGUID/EUI64 Never Reused: No 00:29:29.216 Namespace Write Protected: No 00:29:29.216 Number of LBA Formats: 1 00:29:29.216 Current LBA Format: LBA Format #00 00:29:29.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:29.216 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:29.216 rmmod nvme_tcp 00:29:29.216 rmmod nvme_fabrics 00:29:29.216 rmmod nvme_keyring 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 2101837 ']' 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 2101837 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2101837 ']' 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2101837 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.216 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2101837 00:29:29.477 08:26:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.477 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.477 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2101837' 00:29:29.477 killing process with pid 2101837 00:29:29.477 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2101837 00:29:29.477 08:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2101837 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:29.477 08:26:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:32.023 
08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # 
iptables-restore 00:29:32.023 00:29:32.023 real 0m12.528s 00:29:32.023 user 0m8.532s 00:29:32.023 sys 0m6.711s 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.023 ************************************ 00:29:32.023 END TEST nvmf_identify 00:29:32.023 ************************************ 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.023 ************************************ 00:29:32.023 START TEST nvmf_perf 00:29:32.023 ************************************ 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:32.023 * Looking for test storage... 
00:29:32.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.023 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.024 --rc genhtml_branch_coverage=1 00:29:32.024 --rc genhtml_function_coverage=1 00:29:32.024 --rc genhtml_legend=1 00:29:32.024 --rc geninfo_all_blocks=1 00:29:32.024 --rc geninfo_unexecuted_blocks=1 00:29:32.024 00:29:32.024 ' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:32.024 --rc genhtml_branch_coverage=1 00:29:32.024 --rc genhtml_function_coverage=1 00:29:32.024 --rc genhtml_legend=1 00:29:32.024 --rc geninfo_all_blocks=1 00:29:32.024 --rc geninfo_unexecuted_blocks=1 00:29:32.024 00:29:32.024 ' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.024 --rc genhtml_branch_coverage=1 00:29:32.024 --rc genhtml_function_coverage=1 00:29:32.024 --rc genhtml_legend=1 00:29:32.024 --rc geninfo_all_blocks=1 00:29:32.024 --rc geninfo_unexecuted_blocks=1 00:29:32.024 00:29:32.024 ' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.024 --rc genhtml_branch_coverage=1 00:29:32.024 --rc genhtml_function_coverage=1 00:29:32.024 --rc genhtml_legend=1 00:29:32.024 --rc geninfo_all_blocks=1 00:29:32.024 --rc geninfo_unexecuted_blocks=1 00:29:32.024 00:29:32.024 ' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:32.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:32.024 08:26:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:29:32.024 08:26:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:40.172 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.172 08:26:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:40.172 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:40.172 Found net devices under 0000:31:00.0: cvl_0_0 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:40.172 Found net devices under 0000:31:00.1: cvl_0_1 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.172 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@245 -- # local 
total_initiator_target_pairs=1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@247 -- # create_target_ns 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:40.173 10.0.0.1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:40.173 10.0.0.2 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:40.173 08:26:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:40.173 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:40.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.526 ms 00:29:40.174 00:29:40.174 --- 10.0.0.1 ping statistics --- 00:29:40.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.174 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:40.174 08:26:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:40.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:29:40.174 00:29:40.174 --- 10.0.0.2 ping statistics --- 00:29:40.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.174 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:40.174 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:40.436 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:40.437 
08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:40.437 08:26:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 
00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.437 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=2106900 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 2106900 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2106900 ']' 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.437 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.437 [2024-11-20 08:26:45.065463] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:29:40.437 [2024-11-20 08:26:45.065564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.437 [2024-11-20 08:26:45.158518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.698 [2024-11-20 08:26:45.199964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.698 [2024-11-20 08:26:45.200001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.698 [2024-11-20 08:26:45.200009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.698 [2024-11-20 08:26:45.200016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.698 [2024-11-20 08:26:45.200022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:40.698 [2024-11-20 08:26:45.201633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.698 [2024-11-20 08:26:45.201749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.698 [2024-11-20 08:26:45.201907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.698 [2024-11-20 08:26:45.201907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.271 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:41.842 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:41.842 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:42.103 08:26:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:42.103 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:42.364 [2024-11-20 08:26:46.945890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.364 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.625 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:42.625 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.625 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:42.625 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:42.887 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.149 [2024-11-20 08:26:47.680579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.149 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:43.410 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:43.410 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:43.410 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:43.410 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:44.797 Initializing NVMe Controllers 00:29:44.797 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:44.797 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:44.797 Initialization complete. Launching workers. 00:29:44.797 ======================================================== 00:29:44.797 Latency(us) 00:29:44.797 Device Information : IOPS MiB/s Average min max 00:29:44.797 PCIE (0000:65:00.0) NSID 1 from core 0: 79035.39 308.73 404.31 13.27 4945.07 00:29:44.797 ======================================================== 00:29:44.797 Total : 79035.39 308.73 404.31 13.27 4945.07 00:29:44.797 00:29:44.797 08:26:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.739 Initializing NVMe Controllers 00:29:45.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:45.739 Initialization complete. Launching workers. 
00:29:45.739 ======================================================== 00:29:45.739 Latency(us) 00:29:45.739 Device Information : IOPS MiB/s Average min max 00:29:45.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 8899.05 230.14 45337.33 00:29:45.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15236.63 7957.92 47904.96 00:29:45.739 ======================================================== 00:29:45.739 Total : 179.00 0.70 11235.81 230.14 47904.96 00:29:45.739 00:29:45.739 08:26:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.653 Initializing NVMe Controllers 00:29:47.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:47.653 Initialization complete. Launching workers. 
00:29:47.653 ======================================================== 00:29:47.653 Latency(us) 00:29:47.653 Device Information : IOPS MiB/s Average min max 00:29:47.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10604.00 41.42 3026.79 521.99 10264.91 00:29:47.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3503.00 13.68 9175.13 4820.27 23753.03 00:29:47.653 ======================================================== 00:29:47.653 Total : 14107.00 55.11 4553.52 521.99 23753.03 00:29:47.653 00:29:47.653 08:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:47.653 08:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:47.653 08:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.200 Initializing NVMe Controllers 00:29:50.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.200 Controller IO queue size 128, less than required. 00:29:50.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.200 Controller IO queue size 128, less than required. 00:29:50.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.200 Initialization complete. Launching workers. 
00:29:50.200 ======================================================== 00:29:50.200 Latency(us) 00:29:50.200 Device Information : IOPS MiB/s Average min max 00:29:50.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.87 397.47 81476.23 42654.88 121833.31 00:29:50.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.45 145.61 229808.94 78638.54 346786.43 00:29:50.200 ======================================================== 00:29:50.200 Total : 2172.33 543.08 121247.84 42654.88 346786.43 00:29:50.200 00:29:50.200 08:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:50.200 No valid NVMe controllers or AIO or URING devices found 00:29:50.200 Initializing NVMe Controllers 00:29:50.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.200 Controller IO queue size 128, less than required. 00:29:50.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.200 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:50.200 Controller IO queue size 128, less than required. 00:29:50.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.200 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:50.200 WARNING: Some requested NVMe devices were skipped 00:29:50.200 08:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:52.742 Initializing NVMe Controllers 00:29:52.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.742 Controller IO queue size 128, less than required. 00:29:52.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.742 Controller IO queue size 128, less than required. 00:29:52.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:52.742 Initialization complete. Launching workers. 
00:29:52.743 00:29:52.743 ==================== 00:29:52.743 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:52.743 TCP transport: 00:29:52.743 polls: 20733 00:29:52.743 idle_polls: 12433 00:29:52.743 sock_completions: 8300 00:29:52.743 nvme_completions: 6429 00:29:52.743 submitted_requests: 9656 00:29:52.743 queued_requests: 1 00:29:52.743 00:29:52.743 ==================== 00:29:52.743 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:52.743 TCP transport: 00:29:52.743 polls: 18080 00:29:52.743 idle_polls: 7629 00:29:52.743 sock_completions: 10451 00:29:52.743 nvme_completions: 7263 00:29:52.743 submitted_requests: 10924 00:29:52.743 queued_requests: 1 00:29:52.743 ======================================================== 00:29:52.743 Latency(us) 00:29:52.743 Device Information : IOPS MiB/s Average min max 00:29:52.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1605.83 401.46 81135.71 40794.91 139469.00 00:29:52.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1814.18 453.54 71318.13 37738.33 118303.56 00:29:52.743 ======================================================== 00:29:52.743 Total : 3420.01 855.00 75927.87 37738.33 139469.00 00:29:52.743 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:52.743 08:26:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:52.743 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:52.743 rmmod nvme_tcp 00:29:52.743 rmmod nvme_fabrics 00:29:52.743 rmmod nvme_keyring 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 2106900 ']' 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 2106900 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2106900 ']' 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2106900 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106900 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106900' 00:29:53.005 killing process with pid 2106900 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2106900 00:29:53.005 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2106900 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:54.974 08:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:56.985 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:56.986 08:27:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:29:56.986 00:29:56.986 real 0m25.356s 00:29:56.986 user 0m59.078s 00:29:56.986 sys 0m9.205s 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.986 ************************************ 00:29:56.986 END TEST nvmf_perf 00:29:56.986 ************************************ 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.986 ************************************ 00:29:56.986 START TEST nvmf_fio_host 00:29:56.986 ************************************ 00:29:56.986 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:57.247 * Looking for test storage... 00:29:57.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.247 08:27:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.247 08:27:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.247 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.247 --rc genhtml_branch_coverage=1 00:29:57.247 --rc genhtml_function_coverage=1 00:29:57.247 --rc genhtml_legend=1 00:29:57.248 --rc geninfo_all_blocks=1 00:29:57.248 --rc geninfo_unexecuted_blocks=1 00:29:57.248 00:29:57.248 ' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.248 --rc genhtml_branch_coverage=1 00:29:57.248 --rc genhtml_function_coverage=1 00:29:57.248 --rc genhtml_legend=1 00:29:57.248 --rc geninfo_all_blocks=1 00:29:57.248 --rc geninfo_unexecuted_blocks=1 00:29:57.248 00:29:57.248 ' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.248 --rc genhtml_branch_coverage=1 00:29:57.248 --rc genhtml_function_coverage=1 00:29:57.248 --rc genhtml_legend=1 00:29:57.248 --rc geninfo_all_blocks=1 00:29:57.248 --rc geninfo_unexecuted_blocks=1 00:29:57.248 00:29:57.248 ' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.248 --rc genhtml_branch_coverage=1 00:29:57.248 --rc genhtml_function_coverage=1 00:29:57.248 --rc genhtml_legend=1 00:29:57.248 --rc geninfo_all_blocks=1 00:29:57.248 --rc geninfo_unexecuted_blocks=1 00:29:57.248 00:29:57.248 ' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.248 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:57.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:29:57.249 08:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:30:05.388 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:30:05.389 08:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:05.389 08:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:05.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:05.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:05.389 Found net devices under 0000:31:00.0: cvl_0_0 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:05.389 Found net devices under 0000:31:00.1: cvl_0_1 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@247 -- # create_target_ns 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.389 08:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 
00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:05.389 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:05.390 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:05.390 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:05.390 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:05.390 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:05.390 08:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 
-- # echo 10.0.0.1 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:05.651 10.0.0.1 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:05.651 10.0.0.2 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:05.651 08:27:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:05.651 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 
)) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:05.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.601 ms 00:30:05.652 00:30:05.652 --- 10.0.0.1 ping statistics --- 00:30:05.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.652 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:05.652 
08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:05.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
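The set_ip steps traced above turn 32-bit pool values (167772161, 167772162) into dotted-quad addresses before assigning them with `ip addr add`. A minimal sketch of that val_to_ip conversion, reconstructed from the trace (the octet shift/mask math is an assumption; the trace only shows the final printf):

```shell
# val_to_ip sketch: split a 32-bit value into four octets.
# 167772161 == 0x0A000001 -> 10.0.0.1
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xFF )) \
        $(( (val >> 16) & 0xFF )) \
        $(( (val >> 8)  & 0xFF )) \
        $((  val        & 0xFF ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```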
00:30:05.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:30:05.652 00:30:05.652 --- 10.0.0.2 ping statistics --- 00:30:05.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.652 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # 
echo cvl_0_0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:05.652 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.914 08:27:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.914 08:27:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.914 
08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2114960 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2114960 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2114960 ']' 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
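The waitforlisten step above blocks until the freshly launched nvmf_tgt is ready on /var/tmp/spdk.sock. A hypothetical sketch of just the polling idea (the real helper in autotest_common.sh also checks the pid and probes the RPC socket, which this sketch does not):

```shell
# Poll until a socket path appears, giving up after a bounded retry count.
# wait_for_path and its arguments are illustrative, not the real helper.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -e $path ]] && return 0
        sleep 0.01
    done
    return 1
}
```

In this setting, something like `wait_for_path /var/tmp/spdk.sock` would return once nvmf_tgt creates its RPC socket.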
00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.914 08:27:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.914 [2024-11-20 08:27:10.542259] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:30:05.915 [2024-11-20 08:27:10.542326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.915 [2024-11-20 08:27:10.633620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.175 [2024-11-20 08:27:10.675810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.175 [2024-11-20 08:27:10.675848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.175 [2024-11-20 08:27:10.675860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.175 [2024-11-20 08:27:10.675874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.175 [2024-11-20 08:27:10.675880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
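The set_ip/set_up/ping_ip helpers traced above all share one pattern: when a variable *name* such as NVMF_TARGET_NS_CMD is passed, `local -n` binds it as a nameref and the stored command prefix (here `ip netns exec nvmf_ns_spdk`) is placed in front of the command; the same array is later prepended to NVMF_APP so the target runs inside the namespace. A sketch of that dispatch, with the namespace prefix replaced by a hypothetical `echo` stand-in so it runs anywhere:

```shell
# Hypothetical stand-in for: NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
NVMF_TARGET_NS_CMD=(echo would-run:)

# Run a command, optionally prefixed by the command array whose *name* is $1.
run_maybe_in_ns() {
    local in_ns=$1; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref: bind to the named array
        "${ns[@]}" "$@"
    else
        "$@"
    fi
}
run_maybe_in_ns NVMF_TARGET_NS_CMD ping -c 1 10.0.0.1   # -> would-run: ping -c 1 10.0.0.1
run_maybe_in_ns '' echo no namespace                    # -> no namespace
```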
00:30:06.175 [2024-11-20 08:27:10.677740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.175 [2024-11-20 08:27:10.677881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.175 [2024-11-20 08:27:10.678052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.175 [2024-11-20 08:27:10.678053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.745 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.745 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:30:06.745 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.004 [2024-11-20 08:27:11.501579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.004 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:07.004 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.004 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.004 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:07.264 Malloc1 00:30:07.264 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:07.264 08:27:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:07.523 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.784 [2024-11-20 08:27:12.285570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:07.784 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:08.072 08:27:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.332 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:08.332 fio-3.35 
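Before launching fio, the trace above probes the spdk_nvme plugin with `ldd | grep libasan | awk '{print $3}'` to decide whether an ASan runtime must be prepended to LD_PRELOAD (here none is found, so asan_lib stays empty). A sketch of that extraction; the sample ldd line is hypothetical so the sketch runs without the SPDK build tree:

```shell
# Pull the resolved library path (column 3 of ldd output) for an ASan runtime.
extract_asan_path() {
    grep libasan | awk '{print $3}'
}
# Hypothetical ldd line standing in for: ldd .../spdk/build/fio/spdk_nvme
echo 'libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)' | extract_asan_path
# -> /usr/lib64/libasan.so.8
```

When the probe matches nothing, LD_PRELOAD ends up holding only the fio plugin path, which is exactly what the trace shows.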
00:30:08.332 Starting 1 thread 00:30:10.896 00:30:10.896 test: (groupid=0, jobs=1): err= 0: pid=2115755: Wed Nov 20 08:27:15 2024 00:30:10.896 read: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(101MiB/2005msec) 00:30:10.896 slat (usec): min=2, max=310, avg= 2.16, stdev= 2.68 00:30:10.896 clat (usec): min=3773, max=8988, avg=5441.37, stdev=893.06 00:30:10.896 lat (usec): min=3808, max=8990, avg=5443.53, stdev=893.13 00:30:10.896 clat percentiles (usec): 00:30:10.896 | 1.00th=[ 4359], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4883], 00:30:10.896 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:30:10.896 | 70.00th=[ 5407], 80.00th=[ 5604], 90.00th=[ 7177], 95.00th=[ 7570], 00:30:10.896 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[ 8717], 99.95th=[ 8848], 00:30:10.896 | 99.99th=[ 8848] 00:30:10.896 bw ( KiB/s): min=39808, max=55912, per=99.97%, avg=51732.00, stdev=7950.74, samples=4 00:30:10.896 iops : min= 9952, max=13978, avg=12933.00, stdev=1987.69, samples=4 00:30:10.896 write: IOPS=12.9k, BW=50.5MiB/s (52.9MB/s)(101MiB/2005msec); 0 zone resets 00:30:10.896 slat (usec): min=2, max=275, avg= 2.22, stdev= 1.87 00:30:10.896 clat (usec): min=2928, max=8081, avg=4390.54, stdev=721.71 00:30:10.896 lat (usec): min=2945, max=8083, avg=4392.76, stdev=721.81 00:30:10.896 clat percentiles (usec): 00:30:10.896 | 1.00th=[ 3490], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:30:10.896 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4228], 00:30:10.896 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5800], 95.00th=[ 6128], 00:30:10.896 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 7046], 99.95th=[ 7373], 00:30:10.896 | 99.99th=[ 8029] 00:30:10.896 bw ( KiB/s): min=40447, max=55744, per=99.96%, avg=51671.75, stdev=7487.10, samples=4 00:30:10.896 iops : min=10111, max=13936, avg=12917.75, stdev=1872.15, samples=4 00:30:10.896 lat (msec) : 4=14.30%, 10=85.70% 00:30:10.896 cpu : usr=74.70%, sys=24.00%, ctx=41, majf=0, minf=19 00:30:10.896 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:10.896 issued rwts: total=25939,25910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:10.896 00:30:10.896 Run status group 0 (all jobs): 00:30:10.896 READ: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2005-2005msec 00:30:10.896 WRITE: bw=50.5MiB/s (52.9MB/s), 50.5MiB/s-50.5MiB/s (52.9MB/s-52.9MB/s), io=101MiB (106MB), run=2005-2005msec 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.896 08:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:11.161 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 
16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:11.161 fio-3.35 00:30:11.161 Starting 1 thread 00:30:13.703 00:30:13.703 test: (groupid=0, jobs=1): err= 0: pid=2116352: Wed Nov 20 08:27:18 2024 00:30:13.703 read: IOPS=9163, BW=143MiB/s (150MB/s)(287MiB/2007msec) 00:30:13.703 slat (usec): min=3, max=113, avg= 3.66, stdev= 1.67 00:30:13.703 clat (usec): min=1521, max=49989, avg=8425.90, stdev=3446.14 00:30:13.703 lat (usec): min=1525, max=49993, avg=8429.57, stdev=3446.26 00:30:13.703 clat percentiles (usec): 00:30:13.703 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6521], 00:30:13.703 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8586], 00:30:13.703 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10683], 95.00th=[11600], 00:30:13.703 | 99.00th=[15008], 99.50th=[43779], 99.90th=[48497], 99.95th=[49546], 00:30:13.703 | 99.99th=[50070] 00:30:13.703 bw ( KiB/s): min=63968, max=88608, per=49.58%, avg=72688.00, stdev=10912.50, samples=4 00:30:13.703 iops : min= 3998, max= 5538, avg=4543.00, stdev=682.03, samples=4 00:30:13.703 write: IOPS=5229, BW=81.7MiB/s (85.7MB/s)(148MiB/1806msec); 0 zone resets 00:30:13.703 slat (usec): min=39, max=333, avg=41.27, stdev= 8.10 00:30:13.703 clat (usec): min=2188, max=50478, avg=9680.43, stdev=2810.73 00:30:13.703 lat (usec): min=2228, max=50518, avg=9721.70, stdev=2812.09 00:30:13.703 clat percentiles (usec): 00:30:13.703 | 1.00th=[ 6652], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8160], 00:30:13.703 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:30:13.703 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[12780], 00:30:13.703 | 99.00th=[14615], 99.50th=[16319], 99.90th=[49546], 99.95th=[50070], 00:30:13.703 | 99.99th=[50594] 00:30:13.703 bw ( KiB/s): min=66912, max=91968, per=89.95%, avg=75256.00, stdev=11320.33, samples=4 00:30:13.703 iops : min= 4182, max= 5748, avg=4703.50, stdev=707.52, samples=4 00:30:13.703 lat (msec) : 2=0.03%, 4=0.42%, 
10=74.34%, 20=24.76%, 50=0.44% 00:30:13.703 lat (msec) : 100=0.01% 00:30:13.703 cpu : usr=86.39%, sys=12.26%, ctx=24, majf=0, minf=37 00:30:13.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:13.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.703 issued rwts: total=18391,9444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.703 00:30:13.703 Run status group 0 (all jobs): 00:30:13.703 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2007-2007msec 00:30:13.703 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=148MiB (155MB), run=1806-1806msec 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r 
nvme-tcp 00:30:13.703 rmmod nvme_tcp 00:30:13.703 rmmod nvme_fabrics 00:30:13.703 rmmod nvme_keyring 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 2114960 ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 2114960 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2114960 ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2114960 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2114960 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2114960' 00:30:13.703 killing process with pid 2114960 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2114960 00:30:13.703 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2114960 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:30:13.964 08:27:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:13.964 08:27:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 
== 3 )) 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:30:15.877 00:30:15.877 real 0m18.909s 00:30:15.877 user 1m8.026s 00:30:15.877 sys 0m8.305s 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.877 08:27:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.877 ************************************ 00:30:15.877 END TEST nvmf_fio_host 00:30:15.877 ************************************ 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.138 ************************************ 00:30:16.138 START TEST nvmf_failover 00:30:16.138 ************************************ 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:16.138 * Looking for test storage... 00:30:16.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.138 
08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.138 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.402 08:27:20 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.402 --rc genhtml_branch_coverage=1 00:30:16.402 --rc genhtml_function_coverage=1 00:30:16.402 --rc genhtml_legend=1 00:30:16.402 --rc geninfo_all_blocks=1 00:30:16.402 --rc geninfo_unexecuted_blocks=1 00:30:16.402 00:30:16.402 ' 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.402 --rc genhtml_branch_coverage=1 00:30:16.402 --rc genhtml_function_coverage=1 00:30:16.402 --rc genhtml_legend=1 00:30:16.402 --rc geninfo_all_blocks=1 00:30:16.402 --rc geninfo_unexecuted_blocks=1 00:30:16.402 00:30:16.402 ' 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.402 --rc genhtml_branch_coverage=1 00:30:16.402 --rc genhtml_function_coverage=1 00:30:16.402 --rc genhtml_legend=1 00:30:16.402 --rc geninfo_all_blocks=1 00:30:16.402 --rc geninfo_unexecuted_blocks=1 00:30:16.402 00:30:16.402 ' 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.402 --rc genhtml_branch_coverage=1 00:30:16.402 --rc genhtml_function_coverage=1 00:30:16.402 --rc genhtml_legend=1 00:30:16.402 --rc geninfo_all_blocks=1 00:30:16.402 --rc geninfo_unexecuted_blocks=1 00:30:16.402 00:30:16.402 ' 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD 
]] 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.402 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:16.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:30:16.403 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:30:24.547 08:27:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:24.547 08:27:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:24.547 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:24.547 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:24.547 Found net devices under 0000:31:00.0: cvl_0_0 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.547 08:27:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:24.547 Found net devices under 0000:31:00.1: cvl_0_1 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@247 -- # create_target_ns 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:24.547 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:24.548 08:27:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # 
[[ tcp == tcp ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:24.548 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:24.548 10.0.0.1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:24.548 10.0.0.2 
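The set_ip steps above derive the dotted-quad addresses from integer values via val_to_ip (setup.sh@13's `printf '%u.%u.%u.%u'`): 167772161 is 0x0A000001, i.e. 10.0.0.1. A standalone sketch of that conversion, re-implemented here for illustration rather than quoted from the harness:

```shell
#!/bin/sh
# Convert a 32-bit integer into dotted-quad notation, as the harness's
# val_to_ip helper does (e.g. 167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2).
val_to_ip() {
  val=$(( $1 ))
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ip_pool=0x0a000001` / `(( ip_pool += 2 ))` arithmetic in setup_interfaces: each initiator/target pair consumes two consecutive addresses from the 10.0.0.0/24 pool.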
00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:24.548 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:24.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:24.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.547 ms 00:30:24.811 00:30:24.811 --- 10.0.0.1 ping statistics --- 00:30:24.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.811 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:24.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:30:24.811 00:30:24.811 --- 10.0.0.2 ping statistics --- 00:30:24.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.811 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # 
local dev=initiator0 in_ns= ip 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 
00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:24.811 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=2121621 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 2121621 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2121621 ']' 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.812 08:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.812 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.812 [2024-11-20 08:27:29.501421] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:30:24.812 [2024-11-20 08:27:29.501490] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.073 [2024-11-20 08:27:29.609109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:25.073 [2024-11-20 08:27:29.661275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.073 [2024-11-20 08:27:29.661327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.073 [2024-11-20 08:27:29.661335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.073 [2024-11-20 08:27:29.661342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.073 [2024-11-20 08:27:29.661349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
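nvmf_tgt is launched above with `-m 0xE`, a core mask whose set bits select CPU cores: 0xE is 0b1110, so cores 1, 2 and 3 are enabled, matching the three reactor start-up notices that follow. A minimal sketch of that decoding (the helper name is illustrative, not part of SPDK):

```shell
#!/bin/sh
# Decode a DPDK/SPDK-style hex core mask into the list of enabled cores:
# bit N set in the mask means core N runs a reactor.
mask_to_cores() {
  mask=$(( $1 ))
  core=0
  cores=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then
      cores="${cores:+$cores }$core"
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "$cores"
}

mask_to_cores 0xE    # 1 2 3
```

The "Total cores available: 3" notice above is consistent with this: bit 0 is clear, so core 0 is left free for other processes (here, the bdevperf initiator started later in the test).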
00:30:25.073 [2024-11-20 08:27:29.663469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.073 [2024-11-20 08:27:29.663635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.073 [2024-11-20 08:27:29.663635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.645 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:25.906 [2024-11-20 08:27:30.511874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.906 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:26.167 Malloc0 00:30:26.167 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.428 08:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.428 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.690 [2024-11-20 08:27:31.274678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.690 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:26.951 [2024-11-20 08:27:31.451147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:26.951 [2024-11-20 08:27:31.631726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2121989 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2121989 /var/tmp/bdevperf.sock 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2121989 ']' 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.951 08:27:31 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.951 08:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:27.894 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.894 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:27.895 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:28.155 NVMe0n1 00:30:28.155 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:28.728 00:30:28.728 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2122323 00:30:28.728 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:28.728 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:29.672 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.672 [2024-11-20 08:27:34.373348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20
08:27:34.373552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.672 [2024-11-20 08:27:34.373597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373606] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373659] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373715] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373768] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.673 [2024-11-20 08:27:34.373824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126b390 is same with the state(6) to be set 00:30:29.934 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@45 -- # sleep 3
00:30:33.234 08:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:33.234
00:30:33.234 08:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:33.234 [2024-11-20 08:27:37.852124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126c140 is same with the state(6) to be set
00:30:33.236 08:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:36.536 08:27:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:36.536 [2024-11-20 08:27:41.040954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:36.536 08:27:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:37.476 
08:27:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:37.737 [2024-11-20 08:27:42.228228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126d090 is same with the state(6) to be set
00:30:37.737 08:27:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2122323
00:30:44.332 {
00:30:44.332 "results": [
00:30:44.332 {
00:30:44.332 "job": "NVMe0n1",
00:30:44.332 "core_mask": "0x1",
00:30:44.332 "workload": "verify",
00:30:44.332 "status": "finished",
00:30:44.332 "verify_range": {
00:30:44.332 "start": 0,
00:30:44.332 "length": 16384
00:30:44.332 },
00:30:44.332 "queue_depth": 128,
00:30:44.332 "io_size": 4096,
00:30:44.332 "runtime": 15.011556,
00:30:44.332 "iops": 11190.445547416937,
00:30:44.332 "mibps": 43.71267791959741,
00:30:44.332 "io_failed": 6613,
00:30:44.332 "io_timeout": 0,
00:30:44.332 "avg_latency_us": 10977.667394047696,
00:30:44.332 "min_latency_us": 709.9733333333334,
00:30:44.332 "max_latency_us": 30146.56
00:30:44.332 }
00:30:44.332 ],
00:30:44.332 "core_count": 1
00:30:44.332 }
00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2121989
00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2121989 ']'
00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2121989
00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121989 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121989' 00:30:44.332 killing process with pid 2121989 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2121989 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2121989 00:30:44.332 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:44.332 [2024-11-20 08:27:31.708800] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:30:44.332 [2024-11-20 08:27:31.708915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121989 ] 00:30:44.332 [2024-11-20 08:27:31.791946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.332 [2024-11-20 08:27:31.827753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.332 Running I/O for 15 seconds... 
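(Editor's note, not part of the log: the bdevperf results JSON above reports both "iops" and "mibps" for the NVMe0n1 job. The two fields are related by the job's "io_size" (bytes per I/O), so the reported throughput can be sanity-checked offline with a short shell sketch. The variable names below are illustrative, not part of any SPDK script; the values are copied from the results block.)

```shell
#!/bin/sh
# Hypothetical sanity check of the bdevperf results JSON above:
# MiB/s should equal IOPS * io_size bytes / 2^20.
iops=11190.445547416937   # "iops" from the results block
io_size=4096              # "io_size" from the results block
mibps=$(awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.11f", i * s / 1048576 }')
# should agree with the "mibps" value reported in the log (43.71267791959741)
echo "$mibps"
```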
00:30:44.332 11285.00 IOPS, 44.08 MiB/s [2024-11-20T07:27:49.061Z] [2024-11-20 08:27:34.376416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.332 [2024-11-20 08:27:34.376547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.332 [2024-11-20 08:27:34.376597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.332 [2024-11-20 08:27:34.376607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.333 [2024-11-20 08:27:34.376836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.376986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.376993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.333 [2024-11-20 08:27:34.377135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.333 [2024-11-20 08:27:34.377252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.333 [2024-11-20 08:27:34.377270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.333 [2024-11-20 08:27:34.377280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.333 [2024-11-20 08:27:34.377289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 
[2024-11-20 08:27:34.377421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377515] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.334 [2024-11-20 08:27:34.377760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.334 [2024-11-20 08:27:34.377876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.334 [2024-11-20 08:27:34.377884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 
08:27:34.377910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.377986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.377995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.335 [2024-11-20 08:27:34.378307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.335 [2024-11-20 08:27:34.378518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.335 [2024-11-20 08:27:34.378523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.335 [2024-11-20 08:27:34.378530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:30:44.335 [2024-11-20 08:27:34.378537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:30:44.336 [2024-11-20 08:27:34.378689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.378722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.378729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.378737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.378743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.389851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.389887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.389902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.389910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.389916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.389932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.389937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.389948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.389956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.389963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.336 [2024-11-20 08:27:34.389969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.336 [2024-11-20 08:27:34.389975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:30:44.336 [2024-11-20 08:27:34.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.390026] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:44.336 [2024-11-20 08:27:34.390056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.336 [2024-11-20 08:27:34.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.390074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.336 [2024-11-20 08:27:34.390082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.390091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.336 [2024-11-20 08:27:34.390098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.390106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.336 [2024-11-20 08:27:34.390113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:34.390121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:44.336 [2024-11-20 08:27:34.390169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92d80 (9): Bad file descriptor 00:30:44.336 [2024-11-20 08:27:34.393700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:44.336 [2024-11-20 08:27:34.459763] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:30:44.336 10888.00 IOPS, 42.53 MiB/s [2024-11-20T07:27:49.065Z] 11132.00 IOPS, 43.48 MiB/s [2024-11-20T07:27:49.065Z] 11137.25 IOPS, 43.50 MiB/s [2024-11-20T07:27:49.065Z] [2024-11-20 08:27:37.853037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.336 [2024-11-20 08:27:37.853290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.336 [2024-11-20 08:27:37.853299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.337 [2024-11-20 08:27:37.853358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 
[2024-11-20 08:27:37.853644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.337 [2024-11-20 08:27:37.853837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.337 [2024-11-20 08:27:37.853845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 
[2024-11-20 08:27:37.853936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.853978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.853986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 
[2024-11-20 08:27:37.854225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.338 [2024-11-20 08:27:37.854493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.338 [2024-11-20 08:27:37.854510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.338 [2024-11-20 08:27:37.854519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.854627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 
[2024-11-20 08:27:37.854797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.854992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.854999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.339 [2024-11-20 08:27:37.855115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.855131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.855148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.855164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.339 [2024-11-20 08:27:37.855173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.339 [2024-11-20 08:27:37.855180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.340 [2024-11-20 08:27:37.855196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.340 [2024-11-20 08:27:37.855216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.340 [2024-11-20 08:27:37.855247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.340 [2024-11-20 08:27:37.855253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33448 len:8 PRP1 0x0 PRP2 0x0 00:30:44.340 [2024-11-20 08:27:37.855261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855302] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:44.340 [2024-11-20 08:27:37.855324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.340 [2024-11-20 08:27:37.855334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.340 [2024-11-20 08:27:37.855350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.340 [2024-11-20 08:27:37.855366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.340 [2024-11-20 08:27:37.855381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:37.855389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:44.340 [2024-11-20 08:27:37.855422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92d80 (9): Bad file descriptor 00:30:44.340 [2024-11-20 08:27:37.858994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:44.340 [2024-11-20 08:27:37.922699] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:30:44.340 11003.00 IOPS, 42.98 MiB/s [2024-11-20T07:27:49.069Z] 11013.83 IOPS, 43.02 MiB/s [2024-11-20T07:27:49.069Z] 11031.71 IOPS, 43.09 MiB/s [2024-11-20T07:27:49.069Z] 11069.75 IOPS, 43.24 MiB/s [2024-11-20T07:27:49.069Z] [2024-11-20 08:27:42.231698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.340 [2024-11-20 08:27:42.231737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 
08:27:42.231822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.231990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.231999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.340 [2024-11-20 08:27:42.232219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.340 [2024-11-20 08:27:42.232226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 
08:27:42.232310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:44.341 [2024-11-20 08:27:42.232503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 08:27:42.232768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.341 [2024-11-20 08:27:42.232775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.341 [2024-11-20 
08:27:42.232785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.342 [2024-11-20 08:27:42.232792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.342 [2024-11-20 08:27:42.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.342 [2024-11-20 08:27:42.232835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45968 len:8 PRP1 0x0 PRP2 0x0 00:30:44.342 [2024-11-20 08:27:42.232843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.342 [2024-11-20 08:27:42.232895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.342 [2024-11-20 08:27:42.232912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:44.342 [2024-11-20 08:27:42.232928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.342 [2024-11-20 08:27:42.232942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.232950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a92d80 is same with the state(6) to be set 00:30:44.342 [2024-11-20 08:27:42.233102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.342 [2024-11-20 08:27:42.233112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.342 [2024-11-20 08:27:42.233118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45976 len:8 PRP1 0x0 PRP2 0x0 00:30:44.342 [2024-11-20 08:27:42.233126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.233135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.342 [2024-11-20 08:27:42.233140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.342 [2024-11-20 08:27:42.233146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45984 len:8 PRP1 0x0 PRP2 0x0 00:30:44.342 [2024-11-20 08:27:42.233154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.233161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.342 
[2024-11-20 08:27:42.233167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.342 [2024-11-20 08:27:42.233172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45992 len:8 PRP1 0x0 PRP2 0x0 00:30:44.342 [2024-11-20 08:27:42.233180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.342 [2024-11-20 08:27:42.233188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.342
[... the same four-message cycle (nvme_qpair_abort_queued_reqs *ERROR* / nvme_qpair_manual_complete_request *NOTICE* / nvme_io_qpair_print_command *NOTICE* / spdk_nvme_print_completion *NOTICE*) repeats for each remaining queued request: WRITE lba:46000 through lba:46320 in steps of 8 (timestamps 08:27:42.233193 to .234291), WRITE lba:46328 through lba:46472 (08:27:42.244661 to .245231), one READ lba:45456, then WRITE lba:45464 through lba:45536; every completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45552 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45560 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45568 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45592 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45608 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 
[2024-11-20 08:27:42.245783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45632 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45640 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.345 [2024-11-20 08:27:42.245858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.345 [2024-11-20 08:27:42.245867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.345 [2024-11-20 08:27:42.245873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:45648 len:8 PRP1 0x0 PRP2 0x0 00:30:44.345 [2024-11-20 08:27:42.245880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.245888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.245893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.245899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45656 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.245906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.245914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.245920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.245926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45664 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.245933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.245941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.245946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.245952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45672 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.245959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.245967] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.245972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.245978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45680 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.245985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.245993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.245998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45688 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45696 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 
08:27:42.246060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45704 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45712 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45720 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45736 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45744 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45752 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246239] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45760 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45768 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45776 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.246305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.246323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45784 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 
[2024-11-20 08:27:42.246331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.246338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.246344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.254163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45792 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.254191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.254204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.254210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.254217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45800 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.254225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.254232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.254238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.346 [2024-11-20 08:27:42.254244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45808 len:8 PRP1 0x0 PRP2 0x0 00:30:44.346 [2024-11-20 08:27:42.254251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.346 [2024-11-20 08:27:42.254259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:44.346 [2024-11-20 08:27:42.254269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45816 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45824 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45832 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45840 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45848 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45856 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45864 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45872 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45880 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45888 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45896 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45904 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45912 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45920 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45928 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45936 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45944 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 
[2024-11-20 08:27:42.254719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45952 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45960 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.347 [2024-11-20 08:27:42.254772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.347 [2024-11-20 08:27:42.254778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45968 len:8 PRP1 0x0 PRP2 0x0 00:30:44.347 [2024-11-20 08:27:42.254786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.347 [2024-11-20 08:27:42.254828] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:44.347 [2024-11-20 08:27:42.254838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:30:44.347 [2024-11-20 08:27:42.254890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92d80 (9): Bad file descriptor 00:30:44.347 [2024-11-20 08:27:42.258367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:44.347 [2024-11-20 08:27:42.286987] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:30:44.347 11003.78 IOPS, 42.98 MiB/s [2024-11-20T07:27:49.076Z] 11094.00 IOPS, 43.34 MiB/s [2024-11-20T07:27:49.076Z] 11125.00 IOPS, 43.46 MiB/s [2024-11-20T07:27:49.076Z] 11150.33 IOPS, 43.56 MiB/s [2024-11-20T07:27:49.076Z] 11168.46 IOPS, 43.63 MiB/s [2024-11-20T07:27:49.076Z] 11185.43 IOPS, 43.69 MiB/s [2024-11-20T07:27:49.076Z] 11190.53 IOPS, 43.71 MiB/s 00:30:44.347 Latency(us) 00:30:44.347 [2024-11-20T07:27:49.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.347 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:44.347 Verification LBA range: start 0x0 length 0x4000 00:30:44.347 NVMe0n1 : 15.01 11190.45 43.71 440.53 0.00 10977.67 709.97 30146.56 00:30:44.347 [2024-11-20T07:27:49.076Z] =================================================================================================================== 00:30:44.347 [2024-11-20T07:27:49.077Z] Total : 11190.45 43.71 440.53 0.00 10977.67 709.97 30146.56 00:30:44.348 Received shutdown signal, test time was about 15.000000 seconds 00:30:44.348 00:30:44.348 Latency(us) 00:30:44.348 [2024-11-20T07:27:49.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.348 [2024-11-20T07:27:49.077Z] =================================================================================================================== 00:30:44.348 [2024-11-20T07:27:49.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2125336 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2125336 /var/tmp/bdevperf.sock 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2125336 ']' 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.348 08:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:44.927 08:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.927 08:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:44.927 08:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.927 [2024-11-20 08:27:49.569722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:44.927 08:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:45.188 [2024-11-20 08:27:49.754193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:45.188 08:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:45.549 NVMe0n1 00:30:45.549 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:45.827 00:30:45.827 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:45.827 00:30:45.827 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:45.827 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:46.087 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.347 08:27:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:49.648 08:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:49.648 08:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:49.648 08:27:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.648 08:27:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2126355 00:30:49.648 08:27:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2126355 00:30:50.588 { 00:30:50.588 "results": [ 00:30:50.588 { 00:30:50.588 "job": "NVMe0n1", 00:30:50.588 "core_mask": "0x1", 00:30:50.588 "workload": "verify", 00:30:50.588 "status": "finished", 00:30:50.588 "verify_range": { 00:30:50.588 "start": 0, 00:30:50.588 "length": 16384 00:30:50.588 }, 00:30:50.588 "queue_depth": 128, 00:30:50.588 "io_size": 4096, 00:30:50.588 "runtime": 1.00605, 00:30:50.588 "iops": 11229.064161820983, 00:30:50.588 "mibps": 43.863531882113215, 00:30:50.588 "io_failed": 0, 00:30:50.588 "io_timeout": 0, 00:30:50.588 "avg_latency_us": 
11341.047607919507, 00:30:50.588 "min_latency_us": 1256.1066666666666, 00:30:50.588 "max_latency_us": 14090.24 00:30:50.588 } 00:30:50.588 ], 00:30:50.588 "core_count": 1 00:30:50.588 } 00:30:50.588 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:50.588 [2024-11-20 08:27:48.617921] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:30:50.588 [2024-11-20 08:27:48.617980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125336 ] 00:30:50.588 [2024-11-20 08:27:48.696122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.588 [2024-11-20 08:27:48.731831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.588 [2024-11-20 08:27:50.855735] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:50.588 [2024-11-20 08:27:50.855787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.588 [2024-11-20 08:27:50.855799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.588 [2024-11-20 08:27:50.855809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.588 [2024-11-20 08:27:50.855817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.588 [2024-11-20 08:27:50.855825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:50.588 [2024-11-20 08:27:50.855832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.588 [2024-11-20 08:27:50.855841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.588 [2024-11-20 08:27:50.855848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.588 [2024-11-20 08:27:50.855860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:30:50.588 [2024-11-20 08:27:50.855892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:30:50.588 [2024-11-20 08:27:50.855908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6d80 (9): Bad file descriptor 00:30:50.588 [2024-11-20 08:27:50.862816] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:30:50.588 Running I/O for 1 seconds... 
00:30:50.589 11163.00 IOPS, 43.61 MiB/s 00:30:50.589 Latency(us) 00:30:50.589 [2024-11-20T07:27:55.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.589 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:50.589 Verification LBA range: start 0x0 length 0x4000 00:30:50.589 NVMe0n1 : 1.01 11229.06 43.86 0.00 0.00 11341.05 1256.11 14090.24 00:30:50.589 [2024-11-20T07:27:55.318Z] =================================================================================================================== 00:30:50.589 [2024-11-20T07:27:55.318Z] Total : 11229.06 43.86 0.00 0.00 11341.05 1256.11 14090.24 00:30:50.589 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.589 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:50.850 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.850 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.850 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:51.110 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.370 08:27:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:54.676 08:27:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.676 08:27:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2125336 ']' 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125336' 00:30:54.676 killing process with pid 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2125336 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:54.676 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:54.941 rmmod nvme_tcp 00:30:54.941 rmmod nvme_fabrics 00:30:54.941 rmmod nvme_keyring 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 2121621 ']' 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 2121621 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2121621 ']' 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2121621 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121621 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121621' 00:30:54.941 killing process with pid 2121621 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2121621 00:30:54.941 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2121621 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:55.202 08:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip 
cvl_0_0 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:30:57.115 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:30:57.116 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:30:57.116 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:57.116 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:30:57.377 00:30:57.377 real 0m41.176s 00:30:57.377 user 
2m3.549s 00:30:57.377 sys 0m9.367s 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:57.377 ************************************ 00:30:57.377 END TEST nvmf_failover 00:30:57.377 ************************************ 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.377 ************************************ 00:30:57.377 START TEST nvmf_host_multipath_status 00:30:57.377 ************************************ 00:30:57.377 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:57.377 * Looking for test storage... 
00:30:57.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.377 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:57.377 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:57.377 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:57.639 08:28:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.639 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.640 08:28:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:57.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.640 --rc genhtml_branch_coverage=1 00:30:57.640 --rc genhtml_function_coverage=1 00:30:57.640 --rc genhtml_legend=1 00:30:57.640 --rc geninfo_all_blocks=1 00:30:57.640 --rc geninfo_unexecuted_blocks=1 00:30:57.640 00:30:57.640 ' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:57.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.640 --rc genhtml_branch_coverage=1 00:30:57.640 --rc genhtml_function_coverage=1 00:30:57.640 --rc genhtml_legend=1 00:30:57.640 --rc geninfo_all_blocks=1 00:30:57.640 --rc geninfo_unexecuted_blocks=1 00:30:57.640 00:30:57.640 ' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:57.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.640 --rc genhtml_branch_coverage=1 00:30:57.640 --rc genhtml_function_coverage=1 00:30:57.640 --rc genhtml_legend=1 00:30:57.640 --rc geninfo_all_blocks=1 00:30:57.640 --rc geninfo_unexecuted_blocks=1 00:30:57.640 00:30:57.640 ' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:57.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.640 --rc genhtml_branch_coverage=1 00:30:57.640 --rc genhtml_function_coverage=1 00:30:57.640 --rc genhtml_legend=1 00:30:57.640 --rc geninfo_all_blocks=1 00:30:57.640 --rc geninfo_unexecuted_blocks=1 00:30:57.640 00:30:57.640 ' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:57.640 
08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:57.640 08:28:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:57.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:57.640 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:57.641 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:57.641 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:57.641 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:30:57.641 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:31:05.783 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.783 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:05.783 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:05.783 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:05.783 Found net devices under 0000:31:00.0: cvl_0_0 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:05.783 Found net devices under 0000:31:00.1: cvl_0_1 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@247 -- # create_target_ns 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:05.783 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:05.784 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:05.784 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:31:06.046 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:06.046 10.0.0.1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:06.046 10.0.0.2 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:06.046 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:06.046 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:06.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.501 ms 00:31:06.047 00:31:06.047 --- 10.0.0.1 ping statistics --- 00:31:06.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.047 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:06.047 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:06.047 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:06.308 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:06.308 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:06.308 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:06.308 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:06.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:31:06.308 00:31:06.308 --- 10.0.0.2 ping statistics --- 00:31:06.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.308 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
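The trace above shows `nvmf/setup.sh` resolving `initiator0` to the kernel interface `cvl_0_0` via `get_net_dev`, then reading the IP back out of `/sys/class/net/cvl_0_0/ifalias`, while the lookup for the absent `initiator1` falls through and returns empty. A minimal, hypothetical re-creation of that pattern (not the real `setup.sh` code — `/sys/class/net` is replaced by a temp dir so the sketch runs anywhere, and the name mapping is hard-coded from this log):

```shell
# Stand-in for /sys/class/net: each interface stores its IP in an ifalias file.
sysdir=$(mktemp -d)
mkdir -p "$sysdir/cvl_0_0" "$sysdir/cvl_0_1"
echo 10.0.0.1 > "$sysdir/cvl_0_0/ifalias"
echo 10.0.0.2 > "$sysdir/cvl_0_1/ifalias"

get_net_dev() {                # echo the mapped interface; status 1 when absent
    case $1 in
        initiator0) echo cvl_0_0 ;;
        target0)    echo cvl_0_1 ;;
        *)          return 1 ;;
    esac
}

get_ip_address() {             # empty output when the device is not configured,
    dev=$(get_net_dev "$1") || return 0   # mirroring NVMF_SECOND_INITIATOR_IP=
    ip=$(cat "$sysdir/$dev/ifalias")
    [ -n "$ip" ] && echo "$ip"
}

get_ip_address initiator0      # prints 10.0.0.1
get_ip_address target0         # prints 10.0.0.2
get_ip_address initiator1      # prints nothing
```

Because a missing device exits `get_ip_address` with status 0 and no output, the caller can assign the result straight into a variable and later test it with `[[ -n ... ]]`, exactly as the trace does for `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP`.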
00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:31:06.309 08:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=2132092 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 2132092 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2132092 ']' 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.309 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.310 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.310 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.310 [2024-11-20 08:28:10.962052] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:31:06.310 [2024-11-20 08:28:10.962117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.571 [2024-11-20 08:28:11.053177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:06.571 [2024-11-20 08:28:11.095405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.571 [2024-11-20 08:28:11.095443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
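At this point `nvmfappstart` has launched `nvmf_tgt` inside the `nvmf_ns_spdk` namespace and `waitforlisten` is polling for the RPC socket at `/var/tmp/spdk.sock` (note the `max_retries=100` local in the trace). A hypothetical stand-in that keeps only the bounded retry loop — the real helper also probes the socket through `rpc.py`, which is omitted here; a background subshell creates the path to simulate the target coming up:

```shell
# Poll for a path with a bounded number of attempts, ~0.1 s apart.
waitforpath() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0   # real waitforlisten checks -S and the RPC
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

sock="$(mktemp -u)"            # a path that does not exist yet
( sleep 0.3; : > "$sock" ) &   # simulated target startup
waitforpath "$sock" 50 && echo "listening on $sock"
wait
```

The bounded loop is what lets the suite fail fast with the "Waiting for process to start up and listen on UNIX domain socket..." message instead of hanging when `nvmf_tgt` never comes up.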
00:31:06.571 [2024-11-20 08:28:11.095452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.571 [2024-11-20 08:28:11.095459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.571 [2024-11-20 08:28:11.095465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.571 [2024-11-20 08:28:11.096919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.571 [2024-11-20 08:28:11.096933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2132092 00:31:07.143 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:07.404 [2024-11-20 08:28:11.949348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.404 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:31:07.664 Malloc0 00:31:07.664 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:07.664 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.925 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.925 [2024-11-20 08:28:12.622125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.925 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.185 [2024-11-20 08:28:12.778469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:08.185 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2132490 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2132490 /var/tmp/bdevperf.sock 00:31:08.186 08:28:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2132490 ']' 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:08.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.186 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:08.447 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.447 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:08.447 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:08.708 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:08.968 Nvme0n1 00:31:08.968 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:09.229 Nvme0n1 00:31:09.229 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:09.229 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:11.775 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:11.775 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:11.775 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:11.775 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:12.718 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:12.718 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:12.718 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.718 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.981 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:13.242 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.242 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:13.242 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.242 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:13.502 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.502 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:13.502 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.502 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:13.764 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:14.024 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:14.284 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.239 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.499 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.499 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.499 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.499 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.760 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.760 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.760 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.760 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:16.020 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.021 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:16.281 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.281 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:16.281 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:16.542 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:16.802 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:17.745 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:17.745 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:17.745 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.745 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.007 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:18.268 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.268 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:18.268 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.268 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.530 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.791 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.791 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:18.791 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:19.052 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:19.052 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.437 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.437 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.437 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.437 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.437 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:20.698 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.698 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:20.698 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.698 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.958 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:21.218 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.218 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:21.218 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:21.478 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:21.478 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:22.862 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:22.862 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:22.862 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.862 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.862 08:28:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.862 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.863 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.123 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.123 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:23.123 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.123 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:23.383 
08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.383 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:23.383 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.383 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:23.645 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:23.905 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:24.166 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:25.107 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:25.107 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:25.107 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.107 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.367 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.367 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:25.367 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.367 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.367 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.367 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.367 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.367 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.628 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.628 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.628 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.628 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.888 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.149 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.149 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:26.409 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:26.409 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:26.409 08:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.671 08:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:27.612 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:27.612 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:27.612 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:27.613 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.873 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.873 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.874 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.874 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.134 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.134 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.134 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.134 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.395 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.395 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.395 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:28.395 08:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.395 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.395 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.395 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.395 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.655 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.655 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.655 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.655 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.916 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.916 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:28.916 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:28.916 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:29.176 08:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:30.118 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:30.118 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:30.118 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.118 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.379 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.380 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.380 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.380 08:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.640 08:28:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.640 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:30.901 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.901 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:30.901 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.901 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.162 
08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:31.162 08:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:31.423 08:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:31.683 08:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:32.624 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:32.624 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:32.624 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.624 08:28:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.884 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.145 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.145 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.145 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.145 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.145 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.145 08:28:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.405 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.405 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:33.405 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.405 08:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:33.666 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.926 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:34.186 08:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:35.128 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:35.128 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:35.128 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.128 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.388 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.388 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:35.388 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.388 08:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.649 08:28:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.649 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.910 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.910 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.910 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.910 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.170 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.170 
08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2132490 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2132490 ']' 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2132490 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.171 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2132490 00:31:36.436 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:36.436 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:36.436 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2132490' 00:31:36.436 killing process with pid 2132490 00:31:36.436 08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2132490 00:31:36.436 
08:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2132490
00:31:36.436 {
00:31:36.436   "results": [
00:31:36.436     {
00:31:36.436       "job": "Nvme0n1",
00:31:36.436       "core_mask": "0x4",
00:31:36.436       "workload": "verify",
00:31:36.436       "status": "terminated",
00:31:36.436       "verify_range": {
00:31:36.436         "start": 0,
00:31:36.436         "length": 16384
00:31:36.436       },
00:31:36.436       "queue_depth": 128,
00:31:36.436       "io_size": 4096,
00:31:36.436       "runtime": 26.876444,
00:31:36.436       "iops": 10830.636672024022,
00:31:36.436       "mibps": 42.307174500093836,
00:31:36.436       "io_failed": 0,
00:31:36.436       "io_timeout": 0,
00:31:36.436       "avg_latency_us": 11801.62583255751,
00:31:36.436       "min_latency_us": 283.3066666666667,
00:31:36.436       "max_latency_us": 3019898.88
00:31:36.436     }
00:31:36.436   ],
00:31:36.436   "core_count": 1
00:31:36.436 }
00:31:36.436 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2132490
00:31:36.436 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:36.436 [2024-11-20 08:28:12.845265] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:31:36.436 [2024-11-20 08:28:12.845326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132490 ]
00:31:36.436 [2024-11-20 08:28:12.909939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:36.436 [2024-11-20 08:28:12.938792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:36.436 Running I/O for 90 seconds...
00:31:36.436 9550.00 IOPS, 37.30 MiB/s [2024-11-20T07:28:41.165Z] 9610.50 IOPS, 37.54 MiB/s [2024-11-20T07:28:41.165Z] 9629.67 IOPS, 37.62 MiB/s [2024-11-20T07:28:41.165Z] 9639.00 IOPS, 37.65 MiB/s [2024-11-20T07:28:41.165Z] 9893.20 IOPS, 38.65 MiB/s [2024-11-20T07:28:41.165Z] 10409.00 IOPS, 40.66 MiB/s [2024-11-20T07:28:41.165Z] 10787.71 IOPS, 42.14 MiB/s [2024-11-20T07:28:41.165Z] 10764.38 IOPS, 42.05 MiB/s [2024-11-20T07:28:41.165Z] 10643.56 IOPS, 41.58 MiB/s [2024-11-20T07:28:41.165Z] 10544.90 IOPS, 41.19 MiB/s [2024-11-20T07:28:41.165Z] 10467.82 IOPS, 40.89 MiB/s [2024-11-20T07:28:41.165Z] [2024-11-20 08:28:25.980573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 
cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:36.436 [2024-11-20 08:28:25.980836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.436 [2024-11-20 08:28:25.980842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.980852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.980857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:31:36.437 [2024-11-20 08:28:25.980871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.980877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 
[2024-11-20 08:28:25.981308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 
08:28:25.981428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:36.437 [2024-11-20 08:28:25.981848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.437 [2024-11-20 08:28:25.981854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.981988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.981992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.438 [2024-11-20 08:28:25.982578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:36.438 [2024-11-20 08:28:25.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.438 [2024-11-20 08:28:25.982615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.982989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.982994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:36.439 [2024-11-20 08:28:25.983745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.439 [2024-11-20 08:28:25.983751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:36.439 10344.75 IOPS, 40.41 MiB/s [2024-11-20T07:28:41.169Z] 9549.00 IOPS, 37.30 MiB/s [2024-11-20T07:28:41.169Z] 8866.93 IOPS, 34.64 MiB/s [2024-11-20T07:28:41.169Z] 8339.40 IOPS, 32.58 MiB/s [2024-11-20T07:28:41.169Z] 8623.44 IOPS, 33.69 MiB/s [2024-11-20T07:28:41.169Z] 8875.35 IOPS, 34.67 MiB/s [2024-11-20T07:28:41.169Z] 9306.39 IOPS, 36.35 MiB/s [2024-11-20T07:28:41.169Z] 9711.74 IOPS, 37.94 MiB/s [2024-11-20T07:28:41.169Z] 9988.85 IOPS, 39.02 MiB/s [2024-11-20T07:28:41.169Z] 10132.29 IOPS, 39.58 MiB/s [2024-11-20T07:28:41.169Z] 10259.09 IOPS, 40.07 MiB/s [2024-11-20T07:28:41.169Z] 10512.22 IOPS, 41.06 MiB/s [2024-11-20T07:28:41.169Z] 10783.17 IOPS, 42.12 MiB/s [2024-11-20T07:28:41.169Z] [2024-11-20 08:28:38.702191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.702974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.702984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.702990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.703000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57424 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.440 [2024-11-20 08:28:38.703005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.703015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.703020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:36.440 [2024-11-20 08:28:38.703031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:36.440 [2024-11-20 08:28:38.703036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:36.440 10917.56 IOPS, 42.65 MiB/s [2024-11-20T07:28:41.169Z] 10874.62 IOPS, 42.48 MiB/s [2024-11-20T07:28:41.169Z] Received shutdown signal, test time was about 26.877051 seconds 00:31:36.440 00:31:36.440 Latency(us) 00:31:36.440 [2024-11-20T07:28:41.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.440 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:36.440 Verification LBA range: start 0x0 length 0x4000 00:31:36.440 Nvme0n1 : 26.88 10830.64 42.31 0.00 0.00 11801.63 283.31 3019898.88 00:31:36.440 [2024-11-20T07:28:41.169Z] =================================================================================================================== 00:31:36.440 [2024-11-20T07:28:41.169Z] Total : 10830.64 42.31 0.00 0.00 11801.63 283.31 3019898.88 00:31:36.440 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.702 
08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:36.702 rmmod nvme_tcp 00:31:36.702 rmmod nvme_fabrics 00:31:36.702 rmmod nvme_keyring 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 2132092 ']' 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 2132092 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2132092 ']' 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2132092 00:31:36.702 08:28:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2132092 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2132092' 00:31:36.702 killing process with pid 2132092 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2132092 00:31:36.702 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2132092 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:36.963 08:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 
00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:38.989 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev 
cvl_0_1' 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:31:38.990 00:31:38.990 real 0m41.677s 00:31:38.990 user 1m44.591s 00:31:38.990 sys 0m12.427s 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:38.990 ************************************ 00:31:38.990 END TEST nvmf_host_multipath_status 00:31:38.990 ************************************ 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.990 08:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.990 ************************************ 00:31:38.990 START TEST nvmf_discovery_remove_ifc 00:31:38.990 ************************************ 
00:31:38.990 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:39.252 * Looking for test storage... 00:31:39.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.252 08:28:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.252 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.253 --rc genhtml_branch_coverage=1 00:31:39.253 --rc genhtml_function_coverage=1 00:31:39.253 --rc genhtml_legend=1 00:31:39.253 --rc geninfo_all_blocks=1 00:31:39.253 --rc geninfo_unexecuted_blocks=1 00:31:39.253 00:31:39.253 ' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.253 08:28:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:39.253 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # nvmftestinit 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:39.253 08:28:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:31:39.253 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:31:47.396 08:28:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:47.396 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:47.397 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:47.397 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:47.397 08:28:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:47.397 Found net devices under 0000:31:00.0: cvl_0_0 00:31:47.397 08:28:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:47.397 Found net devices under 0000:31:00.1: cvl_0_1 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:47.397 08:28:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@247 -- # create_target_ns 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 
00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 
00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:47.397 08:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:47.397 10.0.0.1 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:47.397 10.0.0.2 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:47.397 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:47.398 08:28:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:47.398 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:47.659 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:47.659 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:47.660 08:28:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:47.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:47.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.495 ms 00:31:47.660 00:31:47.660 --- 10.0.0.1 ping statistics --- 00:31:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.660 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
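The `val_to_ip` step traced above turns the pool value 167772162 into the dotted quad 10.0.0.2 before it is assigned inside the `nvmf_ns_spdk` namespace (the initiator side gets 167772161, i.e. 10.0.0.1). A minimal standalone sketch of that conversion — the byte-shift arithmetic is an assumption; the trace only shows the final `printf '%u.%u.%u.%u'` call with the bytes already split:

```shell
# Sketch of setup.sh's val_to_ip: split a 32-bit value into four bytes
# (most significant first) and print them as a dotted-quad IPv4 address.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) \
        $(( val & 0xff ))
}

val_to_ip 167772162   # 10.0.0.2 (0x0A000002)
val_to_ip 167772161   # 10.0.0.1
```

This matches the `(( _dev++, ip_pool += 2 ))` step later in the trace: each initiator/target pair consumes two consecutive addresses from the pool.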
00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:47.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:31:47.660 00:31:47.660 --- 10.0.0.2 ping statistics --- 00:31:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.660 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:47.660 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
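The repeated `cat /sys/class/net/<dev>/ifalias` reads above are how the suite recovers each interface's configured address: setup wrote the IP into the interface's `ifalias`, and `get_ip_address` reads it back, wrapping the read in `ip netns exec nvmf_ns_spdk` for target-side devices. A reduced sketch of that pattern — the `SYSFS_NET` override is a hypothetical knob added here so the non-namespace path can be exercised without real interfaces:

```shell
# Reduced sketch of the get_ip_address pattern seen in the trace: the
# address lives in the device's ifalias file; target-side devices are
# read inside the nvmf_ns_spdk namespace, initiator-side ones directly.
get_ip_address() {
    local dev=$1 in_ns=$2
    if [ -n "$in_ns" ]; then
        # Target side: the device was moved into the network namespace.
        ip netns exec "$in_ns" cat "/sys/class/net/$dev/ifalias"
    else
        # Initiator side: read from the host's sysfs tree.
        # SYSFS_NET is an illustration-only override, not in setup.sh.
        cat "${SYSFS_NET:-/sys/class/net}/$dev/ifalias"
    fi
}
```

Using `ifalias` as the store means a device that was never configured simply yields an empty string, which is why the lookups for `initiator1` and `target1` above fall through to empty `NVMF_SECOND_INITIATOR_IP`/`NVMF_SECOND_TARGET_IP` values.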
00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:47.661 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=2143031 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 2143031 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2143031 ']' 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.922 08:28:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.922 [2024-11-20 08:28:52.469774] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:31:47.922 [2024-11-20 08:28:52.469843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.922 [2024-11-20 08:28:52.578606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.922 [2024-11-20 08:28:52.628733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.922 [2024-11-20 08:28:52.628795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.922 [2024-11-20 08:28:52.628803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.922 [2024-11-20 08:28:52.628811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.922 [2024-11-20 08:28:52.628817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:47.922 [2024-11-20 08:28:52.629604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.866 [2024-11-20 08:28:53.330250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.866 [2024-11-20 08:28:53.338506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:48.866 null0 00:31:48.866 [2024-11-20 08:28:53.370450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=2143073 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 2143073 /tmp/host.sock 
00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2143073 ']' 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:48.866 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.866 08:28:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.866 [2024-11-20 08:28:53.446306] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:31:48.866 [2024-11-20 08:28:53.446369] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143073 ] 00:31:48.866 [2024-11-20 08:28:53.529496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.866 [2024-11-20 08:28:53.571906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.809 08:28:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.809 08:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.751 [2024-11-20 08:28:55.339632] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:50.751 [2024-11-20 08:28:55.339652] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:50.751 [2024-11-20 08:28:55.339666] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:50.751 [2024-11-20 08:28:55.469103] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:51.011 [2024-11-20 08:28:55.652270] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:51.011 [2024-11-20 08:28:55.653340] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a7f670:1 started. 
00:31:51.011 [2024-11-20 08:28:55.654993] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:51.011 [2024-11-20 08:28:55.655038] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:51.011 [2024-11-20 08:28:55.655061] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:51.011 [2024-11-20 08:28:55.655075] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:51.011 [2024-11-20 08:28:55.655095] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:31:51.011 [2024-11-20 08:28:55.658292] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a7f670 was disconnected and freed. delete nvme_qpair. 
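The `wait_for_bdev nvme0n1` call here, and the later `wait_for_bdev ''` after the interface is torn down, both drive the same loop visible in the trace: poll `bdev_get_bdevs` over the host RPC socket, flatten the names, and sleep a second until the list matches. A minimal sketch of that loop, assuming SPDK's `rpc_cmd` wrapper is on hand as it is in the trace:

```shell
# Sketch of the get_bdev_list / wait_for_bdev polling seen above:
# pull the bdev names over /tmp/host.sock, sort and flatten them,
# then retry once per second until the list equals the expectation.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [ "$(get_bdev_list)" != "$expected" ]; do
        sleep 1
    done
}
```

Passing an empty string waits for the bdev to disappear, which is what the test exercises after `ip addr del` / `ip link set cvl_0_1 down`: the repeated `[[ nvme0n1 != '' ]] ... sleep 1` iterations below are this loop spinning until the controller-loss timeout removes `nvme0n1`.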
00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.011 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:31:51.012 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:51.272 08:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:52.214 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.475 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:52.476 08:28:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 
1 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:53.417 08:28:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.417 08:28:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:53.417 08:28:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.361 08:28:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:54.361 08:28:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:55.752 08:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:56.693 [2024-11-20 08:29:01.095728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:56.693 [2024-11-20 08:29:01.095772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.693 [2024-11-20 08:29:01.095785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.693 [2024-11-20 08:29:01.095795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.693 [2024-11-20 08:29:01.095802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.693 [2024-11-20 08:29:01.095810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.693 [2024-11-20 08:29:01.095818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.693 [2024-11-20 08:29:01.095825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.693 [2024-11-20 08:29:01.095833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.693 [2024-11-20 08:29:01.095841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.693 [2024-11-20 08:29:01.095849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.693 [2024-11-20 08:29:01.095856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c050 is same with the state(6) to be set 00:31:56.693 [2024-11-20 08:29:01.105750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c050 (9): Bad file descriptor 00:31:56.693 [2024-11-20 08:29:01.115786] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:56.693 [2024-11-20 08:29:01.115799] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:56.693 [2024-11-20 08:29:01.115804] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:56.693 [2024-11-20 08:29:01.115810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:56.693 [2024-11-20 08:29:01.115830] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:56.693 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.694 08:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:57.636 [2024-11-20 08:29:02.171894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:57.636 [2024-11-20 08:29:02.171933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5c050 with addr=10.0.0.2, port=4420 00:31:57.636 [2024-11-20 08:29:02.171944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c050 is same with the 
state(6) to be set 00:31:57.636 [2024-11-20 08:29:02.171965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c050 (9): Bad file descriptor 00:31:57.636 [2024-11-20 08:29:02.172334] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:31:57.636 [2024-11-20 08:29:02.172357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:57.636 [2024-11-20 08:29:02.172366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:57.636 [2024-11-20 08:29:02.172375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:57.636 [2024-11-20 08:29:02.172383] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:57.636 [2024-11-20 08:29:02.172389] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:57.636 [2024-11-20 08:29:02.172394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:57.636 [2024-11-20 08:29:02.172402] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:31:57.636 [2024-11-20 08:29:02.172407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:57.636 08:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.636 08:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:57.636 08:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:58.578 [2024-11-20 08:29:03.174778] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:58.578 [2024-11-20 08:29:03.174798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:58.578 [2024-11-20 08:29:03.174814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:58.578 [2024-11-20 08:29:03.174822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:58.578 [2024-11-20 08:29:03.174829] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:31:58.578 [2024-11-20 08:29:03.174836] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:58.578 [2024-11-20 08:29:03.174842] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:58.578 [2024-11-20 08:29:03.174846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:58.578 [2024-11-20 08:29:03.174870] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:58.578 [2024-11-20 08:29:03.174893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.578 [2024-11-20 08:29:03.174902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.578 [2024-11-20 08:29:03.174912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.578 [2024-11-20 08:29:03.174919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.578 [2024-11-20 08:29:03.174928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.578 [2024-11-20 08:29:03.174935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.578 [2024-11-20 08:29:03.174943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.578 [2024-11-20 08:29:03.174950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.578 [2024-11-20 08:29:03.174959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.578 [2024-11-20 08:29:03.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.578 [2024-11-20 08:29:03.174974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:31:58.578 [2024-11-20 08:29:03.175364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4b380 (9): Bad file descriptor 00:31:58.578 [2024-11-20 08:29:03.176377] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:58.578 [2024-11-20 08:29:03.176389] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:58.578 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:58.839 08:29:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:58.839 08:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.781 08:29:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:59.781 08:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:32:00.722 [2024-11-20 08:29:05.191944] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:00.722 [2024-11-20 08:29:05.191961] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:00.722 [2024-11-20 08:29:05.191974] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:00.722 [2024-11-20 08:29:05.318375] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@24 -- # xargs 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.982 [2024-11-20 08:29:05.493499] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:32:00.982 [2024-11-20 08:29:05.494369] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a57450:1 started. 00:32:00.982 [2024-11-20 08:29:05.495648] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:00.982 [2024-11-20 08:29:05.495683] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:00.982 [2024-11-20 08:29:05.495704] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:00.982 [2024-11-20 08:29:05.495718] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:00.982 [2024-11-20 08:29:05.495726] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:00.982 [2024-11-20 08:29:05.502527] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a57450 was disconnected and freed. delete nvme_qpair. 
00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:00.982 08:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 2143073 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2143073 ']' 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2143073 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:01.922 08:29:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.922 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143073 00:32:01.923 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:01.923 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:01.923 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143073' 00:32:01.923 killing process with pid 2143073 00:32:01.923 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2143073 00:32:01.923 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2143073 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:02.182 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:02.182 rmmod nvme_tcp 00:32:02.182 rmmod nvme_fabrics 00:32:02.182 rmmod nvme_keyring 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 
-- # set -e 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 2143031 ']' 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 2143031 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2143031 ']' 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2143031 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143031 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143031' 00:32:02.183 killing process with pid 2143031 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2143031 00:32:02.183 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2143031 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:32:02.443 08:29:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:02.443 08:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:32:04.356 00:32:04.356 real 0m25.391s 00:32:04.356 user 0m29.702s 00:32:04.356 sys 0m7.817s 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.356 08:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.356 ************************************ 00:32:04.356 END TEST nvmf_discovery_remove_ifc 00:32:04.356 ************************************ 00:32:04.616 08:29:09 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@26 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:04.616 08:29:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:04.616 08:29:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.616 08:29:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.616 ************************************ 00:32:04.616 START TEST nvmf_identify_kernel_target 00:32:04.616 ************************************ 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:04.617 * Looking for test storage... 00:32:04.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 
-- # read -ra ver1 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.617 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.879 --rc genhtml_branch_coverage=1 00:32:04.879 --rc genhtml_function_coverage=1 00:32:04.879 --rc genhtml_legend=1 00:32:04.879 --rc 
geninfo_all_blocks=1 00:32:04.879 --rc geninfo_unexecuted_blocks=1 00:32:04.879 00:32:04.879 ' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.879 --rc genhtml_branch_coverage=1 00:32:04.879 --rc genhtml_function_coverage=1 00:32:04.879 --rc genhtml_legend=1 00:32:04.879 --rc geninfo_all_blocks=1 00:32:04.879 --rc geninfo_unexecuted_blocks=1 00:32:04.879 00:32:04.879 ' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.879 --rc genhtml_branch_coverage=1 00:32:04.879 --rc genhtml_function_coverage=1 00:32:04.879 --rc genhtml_legend=1 00:32:04.879 --rc geninfo_all_blocks=1 00:32:04.879 --rc geninfo_unexecuted_blocks=1 00:32:04.879 00:32:04.879 ' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.879 --rc genhtml_branch_coverage=1 00:32:04.879 --rc genhtml_function_coverage=1 00:32:04.879 --rc genhtml_legend=1 00:32:04.879 --rc geninfo_all_blocks=1 00:32:04.879 --rc geninfo_unexecuted_blocks=1 00:32:04.879 00:32:04.879 ' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.879 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:04.880 08:29:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:04.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:32:04.880 08:29:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:32:04.880 08:29:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 
00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.020 08:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:13.020 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:13.020 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.020 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:13.021 Found net devices under 0000:31:00.0: cvl_0_0 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:13.021 Found net devices under 0000:31:00.1: cvl_0_1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:13.021 
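The discovery loop traced above resolves each NVMe-capable NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<pci>/net/` and stripping the directory prefix. A minimal runnable sketch of that idea, using a temp directory to stand in for sysfs (the PCI addresses and `cvl_0_*` names are taken from the log; the tree layout here is an illustration, not SPDK's `common.sh`):

```shell
# Stand-in sysfs tree so the sketch runs without real hardware.
sys=$(mktemp -d)
mkdir -p "$sys/0000:31:00.0/net/cvl_0_0" "$sys/0000:31:00.1/net/cvl_0_1"

for pci in 0000:31:00.0 0000:31:00.1; do
  for path in "$sys/$pci/net/"*; do
    # ${path##*/} strips everything up to the last '/', leaving the dev name,
    # mirroring the pci_net_devs=("${pci_net_devs[@]##*/}") step in the log.
    echo "Found net devices under $pci: ${path##*/}"
  done
done
# -> Found net devices under 0000:31:00.0: cvl_0_0
# -> Found net devices under 0000:31:00.1: cvl_0_1
```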
08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@247 -- # create_target_ns 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:13.021 08:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:13.021 10.0.0.1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:13.021 10.0.0.2 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:13.021 
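The `val_to_ip` calls traced above turn the 32-bit pool values 167772161 and 167772162 (0x0A000001, 0x0A000002) into the dotted quads 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A small self-contained sketch of that conversion (the function name matches the log; the byte-shifting body is an illustration, not SPDK's `setup.sh`):

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address by extracting
# each byte, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xFF )) $(( (val >> 16) & 0xFF )) \
    $(( (val >>  8) & 0xFF )) $((  val        & 0xFF ))
}

val_to_ip 167772161   # -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # -> 10.0.0.2 (target side, inside the netns)
```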
08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.021 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:13.022 08:29:17 
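Taken together, the trace above builds one initiator/target pair: create the `nvmf_ns_spdk` namespace, move the target NIC into it, assign the two addresses, bring both links up, and open TCP port 4420 in the firewall. The dry-run sketch below emits that command sequence rather than executing it, since applying it needs root and the physical `cvl_0_*` interfaces; names and addresses are taken from the log, the helper function is hypothetical:

```shell
# Print (don't run) the per-pair setup steps seen in the log.
# Pipe the output to "sh" as root on a machine with the real NICs to apply.
emit_pair_setup() {
  local ns=$1 ini=$2 tgt=$3 port=$4
  cat <<EOF
ip netns add $ns
ip netns exec $ns ip link set lo up
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
iptables -I INPUT 1 -i $ini -p tcp --dport $port -j ACCEPT
EOF
}

emit_pair_setup nvmf_ns_spdk cvl_0_0 cvl_0_1 4420
```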
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:13.022 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:13.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.506 ms 00:32:13.284 00:32:13.284 --- 10.0.0.1 ping statistics --- 00:32:13.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.284 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:13.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:32:13.284 00:32:13.284 --- 10.0.0.2 ping statistics --- 00:32:13.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.284 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:13.284 
08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:13.284 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.285 
08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:13.285 08:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ 
-n 10.0.0.1 ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:13.285 08:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:17.492 Waiting for block devices as requested 00:32:17.492 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:17.492 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:17.752 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:17.752 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:17.752 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:18.014 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:18.014 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:18.014 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:18.014 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:18.275 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:18.536 
08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:18.536 No valid GPT data, bailing 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@469 -- # echo 1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:18.536 00:32:18.536 Discovery Log Number of Records 2, Generation counter 2 00:32:18.536 =====Discovery Log Entry 0====== 00:32:18.536 trtype: tcp 00:32:18.536 adrfam: ipv4 00:32:18.536 subtype: current discovery subsystem 00:32:18.536 treq: not specified, sq flow control disable supported 00:32:18.536 portid: 1 00:32:18.536 trsvcid: 4420 00:32:18.536 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:18.536 traddr: 10.0.0.1 00:32:18.536 eflags: none 00:32:18.536 sectype: none 00:32:18.536 =====Discovery Log Entry 1====== 00:32:18.536 trtype: tcp 00:32:18.536 adrfam: ipv4 00:32:18.536 subtype: nvme subsystem 00:32:18.536 treq: not specified, sq flow control disable supported 00:32:18.536 portid: 1 00:32:18.536 trsvcid: 4420 00:32:18.536 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:18.536 traddr: 10.0.0.1 00:32:18.536 eflags: none 
00:32:18.536 sectype: none 00:32:18.536 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:18.536 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:18.798 ===================================================== 00:32:18.798 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:18.798 ===================================================== 00:32:18.798 Controller Capabilities/Features 00:32:18.798 ================================ 00:32:18.798 Vendor ID: 0000 00:32:18.798 Subsystem Vendor ID: 0000 00:32:18.798 Serial Number: 86f846a043b1b6ae5024 00:32:18.798 Model Number: Linux 00:32:18.798 Firmware Version: 6.8.9-20 00:32:18.798 Recommended Arb Burst: 0 00:32:18.798 IEEE OUI Identifier: 00 00 00 00:32:18.798 Multi-path I/O 00:32:18.798 May have multiple subsystem ports: No 00:32:18.798 May have multiple controllers: No 00:32:18.798 Associated with SR-IOV VF: No 00:32:18.798 Max Data Transfer Size: Unlimited 00:32:18.798 Max Number of Namespaces: 0 00:32:18.798 Max Number of I/O Queues: 1024 00:32:18.798 NVMe Specification Version (VS): 1.3 00:32:18.798 NVMe Specification Version (Identify): 1.3 00:32:18.798 Maximum Queue Entries: 1024 00:32:18.798 Contiguous Queues Required: No 00:32:18.798 Arbitration Mechanisms Supported 00:32:18.798 Weighted Round Robin: Not Supported 00:32:18.798 Vendor Specific: Not Supported 00:32:18.798 Reset Timeout: 7500 ms 00:32:18.798 Doorbell Stride: 4 bytes 00:32:18.798 NVM Subsystem Reset: Not Supported 00:32:18.798 Command Sets Supported 00:32:18.798 NVM Command Set: Supported 00:32:18.798 Boot Partition: Not Supported 00:32:18.798 Memory Page Size Minimum: 4096 bytes 00:32:18.798 Memory Page Size Maximum: 4096 bytes 00:32:18.798 Persistent Memory Region: Not Supported 00:32:18.798 Optional Asynchronous Events 
Supported 00:32:18.798 Namespace Attribute Notices: Not Supported 00:32:18.798 Firmware Activation Notices: Not Supported 00:32:18.798 ANA Change Notices: Not Supported 00:32:18.798 PLE Aggregate Log Change Notices: Not Supported 00:32:18.798 LBA Status Info Alert Notices: Not Supported 00:32:18.798 EGE Aggregate Log Change Notices: Not Supported 00:32:18.798 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.798 Zone Descriptor Change Notices: Not Supported 00:32:18.798 Discovery Log Change Notices: Supported 00:32:18.798 Controller Attributes 00:32:18.798 128-bit Host Identifier: Not Supported 00:32:18.798 Non-Operational Permissive Mode: Not Supported 00:32:18.798 NVM Sets: Not Supported 00:32:18.798 Read Recovery Levels: Not Supported 00:32:18.798 Endurance Groups: Not Supported 00:32:18.798 Predictable Latency Mode: Not Supported 00:32:18.798 Traffic Based Keep ALive: Not Supported 00:32:18.798 Namespace Granularity: Not Supported 00:32:18.798 SQ Associations: Not Supported 00:32:18.798 UUID List: Not Supported 00:32:18.798 Multi-Domain Subsystem: Not Supported 00:32:18.798 Fixed Capacity Management: Not Supported 00:32:18.798 Variable Capacity Management: Not Supported 00:32:18.798 Delete Endurance Group: Not Supported 00:32:18.798 Delete NVM Set: Not Supported 00:32:18.798 Extended LBA Formats Supported: Not Supported 00:32:18.798 Flexible Data Placement Supported: Not Supported 00:32:18.798 00:32:18.798 Controller Memory Buffer Support 00:32:18.798 ================================ 00:32:18.798 Supported: No 00:32:18.798 00:32:18.798 Persistent Memory Region Support 00:32:18.798 ================================ 00:32:18.798 Supported: No 00:32:18.798 00:32:18.798 Admin Command Set Attributes 00:32:18.798 ============================ 00:32:18.798 Security Send/Receive: Not Supported 00:32:18.798 Format NVM: Not Supported 00:32:18.798 Firmware Activate/Download: Not Supported 00:32:18.798 Namespace Management: Not Supported 00:32:18.798 Device 
Self-Test: Not Supported 00:32:18.798 Directives: Not Supported 00:32:18.798 NVMe-MI: Not Supported 00:32:18.798 Virtualization Management: Not Supported 00:32:18.798 Doorbell Buffer Config: Not Supported 00:32:18.798 Get LBA Status Capability: Not Supported 00:32:18.798 Command & Feature Lockdown Capability: Not Supported 00:32:18.798 Abort Command Limit: 1 00:32:18.798 Async Event Request Limit: 1 00:32:18.798 Number of Firmware Slots: N/A 00:32:18.798 Firmware Slot 1 Read-Only: N/A 00:32:18.798 Firmware Activation Without Reset: N/A 00:32:18.798 Multiple Update Detection Support: N/A 00:32:18.798 Firmware Update Granularity: No Information Provided 00:32:18.798 Per-Namespace SMART Log: No 00:32:18.798 Asymmetric Namespace Access Log Page: Not Supported 00:32:18.798 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:18.798 Command Effects Log Page: Not Supported 00:32:18.798 Get Log Page Extended Data: Supported 00:32:18.798 Telemetry Log Pages: Not Supported 00:32:18.798 Persistent Event Log Pages: Not Supported 00:32:18.798 Supported Log Pages Log Page: May Support 00:32:18.798 Commands Supported & Effects Log Page: Not Supported 00:32:18.799 Feature Identifiers & Effects Log Page:May Support 00:32:18.799 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.799 Data Area 4 for Telemetry Log: Not Supported 00:32:18.799 Error Log Page Entries Supported: 1 00:32:18.799 Keep Alive: Not Supported 00:32:18.799 00:32:18.799 NVM Command Set Attributes 00:32:18.799 ========================== 00:32:18.799 Submission Queue Entry Size 00:32:18.799 Max: 1 00:32:18.799 Min: 1 00:32:18.799 Completion Queue Entry Size 00:32:18.799 Max: 1 00:32:18.799 Min: 1 00:32:18.799 Number of Namespaces: 0 00:32:18.799 Compare Command: Not Supported 00:32:18.799 Write Uncorrectable Command: Not Supported 00:32:18.799 Dataset Management Command: Not Supported 00:32:18.799 Write Zeroes Command: Not Supported 00:32:18.799 Set Features Save Field: Not Supported 00:32:18.799 
Reservations: Not Supported 00:32:18.799 Timestamp: Not Supported 00:32:18.799 Copy: Not Supported 00:32:18.799 Volatile Write Cache: Not Present 00:32:18.799 Atomic Write Unit (Normal): 1 00:32:18.799 Atomic Write Unit (PFail): 1 00:32:18.799 Atomic Compare & Write Unit: 1 00:32:18.799 Fused Compare & Write: Not Supported 00:32:18.799 Scatter-Gather List 00:32:18.799 SGL Command Set: Supported 00:32:18.799 SGL Keyed: Not Supported 00:32:18.799 SGL Bit Bucket Descriptor: Not Supported 00:32:18.799 SGL Metadata Pointer: Not Supported 00:32:18.799 Oversized SGL: Not Supported 00:32:18.799 SGL Metadata Address: Not Supported 00:32:18.799 SGL Offset: Supported 00:32:18.799 Transport SGL Data Block: Not Supported 00:32:18.799 Replay Protected Memory Block: Not Supported 00:32:18.799 00:32:18.799 Firmware Slot Information 00:32:18.799 ========================= 00:32:18.799 Active slot: 0 00:32:18.799 00:32:18.799 00:32:18.799 Error Log 00:32:18.799 ========= 00:32:18.799 00:32:18.799 Active Namespaces 00:32:18.799 ================= 00:32:18.799 Discovery Log Page 00:32:18.799 ================== 00:32:18.799 Generation Counter: 2 00:32:18.799 Number of Records: 2 00:32:18.799 Record Format: 0 00:32:18.799 00:32:18.799 Discovery Log Entry 0 00:32:18.799 ---------------------- 00:32:18.799 Transport Type: 3 (TCP) 00:32:18.799 Address Family: 1 (IPv4) 00:32:18.799 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:18.799 Entry Flags: 00:32:18.799 Duplicate Returned Information: 0 00:32:18.799 Explicit Persistent Connection Support for Discovery: 0 00:32:18.799 Transport Requirements: 00:32:18.799 Secure Channel: Not Specified 00:32:18.799 Port ID: 1 (0x0001) 00:32:18.799 Controller ID: 65535 (0xffff) 00:32:18.799 Admin Max SQ Size: 32 00:32:18.799 Transport Service Identifier: 4420 00:32:18.799 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:18.799 Transport Address: 10.0.0.1 00:32:18.799 Discovery Log Entry 1 00:32:18.799 ---------------------- 
00:32:18.799 Transport Type: 3 (TCP) 00:32:18.799 Address Family: 1 (IPv4) 00:32:18.799 Subsystem Type: 2 (NVM Subsystem) 00:32:18.799 Entry Flags: 00:32:18.799 Duplicate Returned Information: 0 00:32:18.799 Explicit Persistent Connection Support for Discovery: 0 00:32:18.799 Transport Requirements: 00:32:18.799 Secure Channel: Not Specified 00:32:18.799 Port ID: 1 (0x0001) 00:32:18.799 Controller ID: 65535 (0xffff) 00:32:18.799 Admin Max SQ Size: 32 00:32:18.799 Transport Service Identifier: 4420 00:32:18.799 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:18.799 Transport Address: 10.0.0.1 00:32:18.799 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.799 get_feature(0x01) failed 00:32:18.799 get_feature(0x02) failed 00:32:18.799 get_feature(0x04) failed 00:32:18.799 ===================================================== 00:32:18.799 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:18.799 ===================================================== 00:32:18.799 Controller Capabilities/Features 00:32:18.799 ================================ 00:32:18.799 Vendor ID: 0000 00:32:18.799 Subsystem Vendor ID: 0000 00:32:18.799 Serial Number: cd8b3bf8a8e95ed2c309 00:32:18.799 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.799 Firmware Version: 6.8.9-20 00:32:18.799 Recommended Arb Burst: 6 00:32:18.799 IEEE OUI Identifier: 00 00 00 00:32:18.799 Multi-path I/O 00:32:18.799 May have multiple subsystem ports: Yes 00:32:18.799 May have multiple controllers: Yes 00:32:18.799 Associated with SR-IOV VF: No 00:32:18.799 Max Data Transfer Size: Unlimited 00:32:18.799 Max Number of Namespaces: 1024 00:32:18.799 Max Number of I/O Queues: 128 00:32:18.799 NVMe Specification Version (VS): 1.3 00:32:18.799 NVMe 
Specification Version (Identify): 1.3 00:32:18.799 Maximum Queue Entries: 1024 00:32:18.799 Contiguous Queues Required: No 00:32:18.799 Arbitration Mechanisms Supported 00:32:18.799 Weighted Round Robin: Not Supported 00:32:18.799 Vendor Specific: Not Supported 00:32:18.799 Reset Timeout: 7500 ms 00:32:18.799 Doorbell Stride: 4 bytes 00:32:18.799 NVM Subsystem Reset: Not Supported 00:32:18.799 Command Sets Supported 00:32:18.799 NVM Command Set: Supported 00:32:18.799 Boot Partition: Not Supported 00:32:18.799 Memory Page Size Minimum: 4096 bytes 00:32:18.799 Memory Page Size Maximum: 4096 bytes 00:32:18.799 Persistent Memory Region: Not Supported 00:32:18.799 Optional Asynchronous Events Supported 00:32:18.799 Namespace Attribute Notices: Supported 00:32:18.799 Firmware Activation Notices: Not Supported 00:32:18.799 ANA Change Notices: Supported 00:32:18.799 PLE Aggregate Log Change Notices: Not Supported 00:32:18.799 LBA Status Info Alert Notices: Not Supported 00:32:18.799 EGE Aggregate Log Change Notices: Not Supported 00:32:18.799 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.799 Zone Descriptor Change Notices: Not Supported 00:32:18.799 Discovery Log Change Notices: Not Supported 00:32:18.799 Controller Attributes 00:32:18.799 128-bit Host Identifier: Supported 00:32:18.799 Non-Operational Permissive Mode: Not Supported 00:32:18.799 NVM Sets: Not Supported 00:32:18.799 Read Recovery Levels: Not Supported 00:32:18.799 Endurance Groups: Not Supported 00:32:18.799 Predictable Latency Mode: Not Supported 00:32:18.800 Traffic Based Keep ALive: Supported 00:32:18.800 Namespace Granularity: Not Supported 00:32:18.800 SQ Associations: Not Supported 00:32:18.800 UUID List: Not Supported 00:32:18.800 Multi-Domain Subsystem: Not Supported 00:32:18.800 Fixed Capacity Management: Not Supported 00:32:18.800 Variable Capacity Management: Not Supported 00:32:18.800 Delete Endurance Group: Not Supported 00:32:18.800 Delete NVM Set: Not Supported 00:32:18.800 
Extended LBA Formats Supported: Not Supported 00:32:18.800 Flexible Data Placement Supported: Not Supported 00:32:18.800 00:32:18.800 Controller Memory Buffer Support 00:32:18.800 ================================ 00:32:18.800 Supported: No 00:32:18.800 00:32:18.800 Persistent Memory Region Support 00:32:18.800 ================================ 00:32:18.800 Supported: No 00:32:18.800 00:32:18.800 Admin Command Set Attributes 00:32:18.800 ============================ 00:32:18.800 Security Send/Receive: Not Supported 00:32:18.800 Format NVM: Not Supported 00:32:18.800 Firmware Activate/Download: Not Supported 00:32:18.800 Namespace Management: Not Supported 00:32:18.800 Device Self-Test: Not Supported 00:32:18.800 Directives: Not Supported 00:32:18.800 NVMe-MI: Not Supported 00:32:18.800 Virtualization Management: Not Supported 00:32:18.800 Doorbell Buffer Config: Not Supported 00:32:18.800 Get LBA Status Capability: Not Supported 00:32:18.800 Command & Feature Lockdown Capability: Not Supported 00:32:18.800 Abort Command Limit: 4 00:32:18.800 Async Event Request Limit: 4 00:32:18.800 Number of Firmware Slots: N/A 00:32:18.800 Firmware Slot 1 Read-Only: N/A 00:32:18.800 Firmware Activation Without Reset: N/A 00:32:18.800 Multiple Update Detection Support: N/A 00:32:18.800 Firmware Update Granularity: No Information Provided 00:32:18.800 Per-Namespace SMART Log: Yes 00:32:18.800 Asymmetric Namespace Access Log Page: Supported 00:32:18.800 ANA Transition Time : 10 sec 00:32:18.800 00:32:18.800 Asymmetric Namespace Access Capabilities 00:32:18.800 ANA Optimized State : Supported 00:32:18.800 ANA Non-Optimized State : Supported 00:32:18.800 ANA Inaccessible State : Supported 00:32:18.800 ANA Persistent Loss State : Supported 00:32:18.800 ANA Change State : Supported 00:32:18.800 ANAGRPID is not changed : No 00:32:18.800 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:18.800 00:32:18.800 ANA Group Identifier Maximum : 128 00:32:18.800 Number of ANA Group Identifiers 
: 128 00:32:18.800 Max Number of Allowed Namespaces : 1024 00:32:18.800 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:18.800 Command Effects Log Page: Supported 00:32:18.800 Get Log Page Extended Data: Supported 00:32:18.800 Telemetry Log Pages: Not Supported 00:32:18.800 Persistent Event Log Pages: Not Supported 00:32:18.800 Supported Log Pages Log Page: May Support 00:32:18.800 Commands Supported & Effects Log Page: Not Supported 00:32:18.800 Feature Identifiers & Effects Log Page:May Support 00:32:18.800 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.800 Data Area 4 for Telemetry Log: Not Supported 00:32:18.800 Error Log Page Entries Supported: 128 00:32:18.800 Keep Alive: Supported 00:32:18.800 Keep Alive Granularity: 1000 ms 00:32:18.800 00:32:18.800 NVM Command Set Attributes 00:32:18.800 ========================== 00:32:18.800 Submission Queue Entry Size 00:32:18.800 Max: 64 00:32:18.800 Min: 64 00:32:18.800 Completion Queue Entry Size 00:32:18.800 Max: 16 00:32:18.800 Min: 16 00:32:18.800 Number of Namespaces: 1024 00:32:18.800 Compare Command: Not Supported 00:32:18.800 Write Uncorrectable Command: Not Supported 00:32:18.800 Dataset Management Command: Supported 00:32:18.800 Write Zeroes Command: Supported 00:32:18.800 Set Features Save Field: Not Supported 00:32:18.800 Reservations: Not Supported 00:32:18.800 Timestamp: Not Supported 00:32:18.800 Copy: Not Supported 00:32:18.800 Volatile Write Cache: Present 00:32:18.800 Atomic Write Unit (Normal): 1 00:32:18.800 Atomic Write Unit (PFail): 1 00:32:18.800 Atomic Compare & Write Unit: 1 00:32:18.800 Fused Compare & Write: Not Supported 00:32:18.800 Scatter-Gather List 00:32:18.800 SGL Command Set: Supported 00:32:18.800 SGL Keyed: Not Supported 00:32:18.800 SGL Bit Bucket Descriptor: Not Supported 00:32:18.800 SGL Metadata Pointer: Not Supported 00:32:18.800 Oversized SGL: Not Supported 00:32:18.800 SGL Metadata Address: Not Supported 00:32:18.800 SGL Offset: Supported 00:32:18.800 Transport 
SGL Data Block: Not Supported 00:32:18.800 Replay Protected Memory Block: Not Supported 00:32:18.800 00:32:18.800 Firmware Slot Information 00:32:18.800 ========================= 00:32:18.800 Active slot: 0 00:32:18.800 00:32:18.800 Asymmetric Namespace Access 00:32:18.800 =========================== 00:32:18.800 Change Count : 0 00:32:18.800 Number of ANA Group Descriptors : 1 00:32:18.800 ANA Group Descriptor : 0 00:32:18.800 ANA Group ID : 1 00:32:18.800 Number of NSID Values : 1 00:32:18.800 Change Count : 0 00:32:18.800 ANA State : 1 00:32:18.800 Namespace Identifier : 1 00:32:18.800 00:32:18.800 Commands Supported and Effects 00:32:18.800 ============================== 00:32:18.800 Admin Commands 00:32:18.800 -------------- 00:32:18.800 Get Log Page (02h): Supported 00:32:18.800 Identify (06h): Supported 00:32:18.800 Abort (08h): Supported 00:32:18.800 Set Features (09h): Supported 00:32:18.800 Get Features (0Ah): Supported 00:32:18.800 Asynchronous Event Request (0Ch): Supported 00:32:18.800 Keep Alive (18h): Supported 00:32:18.800 I/O Commands 00:32:18.800 ------------ 00:32:18.800 Flush (00h): Supported 00:32:18.800 Write (01h): Supported LBA-Change 00:32:18.800 Read (02h): Supported 00:32:18.800 Write Zeroes (08h): Supported LBA-Change 00:32:18.800 Dataset Management (09h): Supported 00:32:18.800 00:32:18.800 Error Log 00:32:18.800 ========= 00:32:18.800 Entry: 0 00:32:18.800 Error Count: 0x3 00:32:18.800 Submission Queue Id: 0x0 00:32:18.800 Command Id: 0x5 00:32:18.800 Phase Bit: 0 00:32:18.800 Status Code: 0x2 00:32:18.800 Status Code Type: 0x0 00:32:18.800 Do Not Retry: 1 00:32:18.800 Error Location: 0x28 00:32:18.800 LBA: 0x0 00:32:18.800 Namespace: 0x0 00:32:18.800 Vendor Log Page: 0x0 00:32:18.800 ----------- 00:32:18.800 Entry: 1 00:32:18.800 Error Count: 0x2 00:32:18.800 Submission Queue Id: 0x0 00:32:18.800 Command Id: 0x5 00:32:18.800 Phase Bit: 0 00:32:18.800 Status Code: 0x2 00:32:18.800 Status Code Type: 0x0 00:32:18.800 Do Not Retry: 1 
00:32:18.800 Error Location: 0x28 00:32:18.800 LBA: 0x0 00:32:18.800 Namespace: 0x0 00:32:18.800 Vendor Log Page: 0x0 00:32:18.800 ----------- 00:32:18.801 Entry: 2 00:32:18.801 Error Count: 0x1 00:32:18.801 Submission Queue Id: 0x0 00:32:18.801 Command Id: 0x4 00:32:18.801 Phase Bit: 0 00:32:18.801 Status Code: 0x2 00:32:18.801 Status Code Type: 0x0 00:32:18.801 Do Not Retry: 1 00:32:18.801 Error Location: 0x28 00:32:18.801 LBA: 0x0 00:32:18.801 Namespace: 0x0 00:32:18.801 Vendor Log Page: 0x0 00:32:18.801 00:32:18.801 Number of Queues 00:32:18.801 ================ 00:32:18.801 Number of I/O Submission Queues: 128 00:32:18.801 Number of I/O Completion Queues: 128 00:32:18.801 00:32:18.801 ZNS Specific Controller Data 00:32:18.801 ============================ 00:32:18.801 Zone Append Size Limit: 0 00:32:18.801 00:32:18.801 00:32:18.801 Active Namespaces 00:32:18.801 ================= 00:32:18.801 get_feature(0x05) failed 00:32:18.801 Namespace ID:1 00:32:18.801 Command Set Identifier: NVM (00h) 00:32:18.801 Deallocate: Supported 00:32:18.801 Deallocated/Unwritten Error: Not Supported 00:32:18.801 Deallocated Read Value: Unknown 00:32:18.801 Deallocate in Write Zeroes: Not Supported 00:32:18.801 Deallocated Guard Field: 0xFFFF 00:32:18.801 Flush: Supported 00:32:18.801 Reservation: Not Supported 00:32:18.801 Namespace Sharing Capabilities: Multiple Controllers 00:32:18.801 Size (in LBAs): 3750748848 (1788GiB) 00:32:18.801 Capacity (in LBAs): 3750748848 (1788GiB) 00:32:18.801 Utilization (in LBAs): 3750748848 (1788GiB) 00:32:18.801 UUID: 81f465dd-ce56-4af9-aa6c-19d7d1f002d0 00:32:18.801 Thin Provisioning: Not Supported 00:32:18.801 Per-NS Atomic Units: Yes 00:32:18.801 Atomic Write Unit (Normal): 8 00:32:18.801 Atomic Write Unit (PFail): 8 00:32:18.801 Preferred Write Granularity: 8 00:32:18.801 Atomic Compare & Write Unit: 8 00:32:18.801 Atomic Boundary Size (Normal): 0 00:32:18.801 Atomic Boundary Size (PFail): 0 00:32:18.801 Atomic Boundary Offset: 0 00:32:18.801 
NGUID/EUI64 Never Reused: No 00:32:18.801 ANA group ID: 1 00:32:18.801 Namespace Write Protected: No 00:32:18.801 Number of LBA Formats: 1 00:32:18.801 Current LBA Format: LBA Format #00 00:32:18.801 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:18.801 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:18.801 rmmod nvme_tcp 00:32:18.801 rmmod nvme_fabrics 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:18.801 
08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:18.801 08:29:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:21.346 08:29:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f 
/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:32:21.346 08:29:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:25.566 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:25.566 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:25.566 00:32:25.566 
real 0m20.941s 00:32:25.566 user 0m5.690s 00:32:25.566 sys 0m12.308s 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.566 ************************************ 00:32:25.566 END TEST nvmf_identify_kernel_target 00:32:25.566 ************************************ 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.566 ************************************ 00:32:25.566 START TEST nvmf_auth_host 00:32:25.566 ************************************ 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:25.566 * Looking for test storage... 
00:32:25.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.566 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.827 --rc genhtml_branch_coverage=1 00:32:25.827 --rc genhtml_function_coverage=1 00:32:25.827 --rc genhtml_legend=1 00:32:25.827 --rc geninfo_all_blocks=1 00:32:25.827 --rc geninfo_unexecuted_blocks=1 00:32:25.827 00:32:25.827 ' 00:32:25.827 08:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.827 --rc genhtml_branch_coverage=1 00:32:25.827 --rc genhtml_function_coverage=1 00:32:25.827 --rc genhtml_legend=1 00:32:25.827 --rc geninfo_all_blocks=1 00:32:25.827 --rc geninfo_unexecuted_blocks=1 00:32:25.827 00:32:25.827 ' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.827 --rc genhtml_branch_coverage=1 00:32:25.827 --rc genhtml_function_coverage=1 00:32:25.827 --rc genhtml_legend=1 00:32:25.827 --rc geninfo_all_blocks=1 00:32:25.827 --rc geninfo_unexecuted_blocks=1 00:32:25.827 00:32:25.827 ' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.827 --rc genhtml_branch_coverage=1 00:32:25.827 --rc genhtml_function_coverage=1 00:32:25.827 --rc genhtml_legend=1 00:32:25.827 --rc geninfo_all_blocks=1 00:32:25.827 --rc geninfo_unexecuted_blocks=1 00:32:25.827 00:32:25.827 ' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.827 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:25.828 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:32:25.828 08:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:33.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:33.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.971 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:33.971 Found net devices under 0000:31:00.0: cvl_0_0 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:33.971 Found net devices under 0000:31:00.1: cvl_0_1 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@247 -- # create_target_ns 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:33.971 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:33.971 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:33.972 10.0.0.1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/cvl_0_1/ifalias 00:32:33.972 10.0.0.2 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:33.972 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:34.234 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:34.234 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:34.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.584 ms 00:32:34.234 00:32:34.234 --- 10.0.0.1 ping statistics --- 00:32:34.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.234 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:34.234 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:34.234 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:34.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:32:34.234 00:32:34.234 --- 10.0.0.2 ping statistics --- 00:32:34.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.234 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:34.235 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:34.235 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:34.235 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:32:34.235 08:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=2159140 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 2159140 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2159140 ']' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
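Earlier in this log, `val_to_ip` turns the 32-bit pool values `167772161` and `167772162` into `10.0.0.1` and `10.0.0.2` for the interface pair. That conversion is plain base-256 arithmetic; a self-contained sketch (the trace only shows the `printf` with pre-split octets, so the shifting here is a reconstruction of the same mapping, not a copy of `nvmf/setup.sh`):

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address,
# e.g. 167772161 (0x0A000001) -> 10.0.0.1
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
```

The `(( ip_pool += 2 ))` step in `setup_interfaces` then advances the pool by two per pair, giving each initiator/target duo consecutive addresses.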
00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.235 08:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.176 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:32:35.177 08:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c843f50128851c69f56d8a2ef95aa508 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.O1I 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c843f50128851c69f56d8a2ef95aa508 0 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c843f50128851c69f56d8a2ef95aa508 0 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c843f50128851c69f56d8a2ef95aa508 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:32:35.177 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.O1I 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.O1I 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.O1I 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d22a6eb811d95c7b5d87e7f12c1bafcb615f5376c59dc52805b36f35e6930b60 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.fzW 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d22a6eb811d95c7b5d87e7f12c1bafcb615f5376c59dc52805b36f35e6930b60 3 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d22a6eb811d95c7b5d87e7f12c1bafcb615f5376c59dc52805b36f35e6930b60 3 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d22a6eb811d95c7b5d87e7f12c1bafcb615f5376c59dc52805b36f35e6930b60 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.fzW 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.fzW 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # 
ckeys[0]=/tmp/spdk.key-sha512.fzW 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:35.438 08:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:35.438 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=3086f7e6bd5373eae59b09ee9b688ac70310e3c8b827a8c8 00:32:35.438 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:35.438 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.a6m 00:32:35.438 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 3086f7e6bd5373eae59b09ee9b688ac70310e3c8b827a8c8 0 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 3086f7e6bd5373eae59b09ee9b688ac70310e3c8b827a8c8 0 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=3086f7e6bd5373eae59b09ee9b688ac70310e3c8b827a8c8 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.439 
08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.a6m 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.a6m 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.a6m 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=17f3ac42d807c7773b2e27e863c4517d94a5a5e8bd8e0c62 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.apq 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 17f3ac42d807c7773b2e27e863c4517d94a5a5e8bd8e0c62 2 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 17f3ac42d807c7773b2e27e863c4517d94a5a5e8bd8e0c62 2 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.439 08:29:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=17f3ac42d807c7773b2e27e863c4517d94a5a5e8bd8e0c62 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.apq 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.apq 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.apq 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=06e144e3f02ad01090c553a8e49abec6 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.hBc 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 06e144e3f02ad01090c553a8e49abec6 1 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 
06e144e3f02ad01090c553a8e49abec6 1 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=06e144e3f02ad01090c553a8e49abec6 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:32:35.439 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.hBc 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.hBc 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hBc 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e22a4426da2d84df8a74193ffed0bc9f 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.x2Q 
00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e22a4426da2d84df8a74193ffed0bc9f 1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e22a4426da2d84df8a74193ffed0bc9f 1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e22a4426da2d84df8a74193ffed0bc9f 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.x2Q 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.x2Q 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.x2Q 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # 
key=ab9662e2b9371e8339fb35de08e82e41983e0240872405a0 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.eTE 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key ab9662e2b9371e8339fb35de08e82e41983e0240872405a0 2 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 ab9662e2b9371e8339fb35de08e82e41983e0240872405a0 2 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=ab9662e2b9371e8339fb35de08e82e41983e0240872405a0 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.eTE 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.eTE 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eTE 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # 
digest=null 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=de0102abaee4b5613e9af96e94704d3e 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.tQm 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key de0102abaee4b5613e9af96e94704d3e 0 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 de0102abaee4b5613e9af96e94704d3e 0 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=de0102abaee4b5613e9af96e94704d3e 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.tQm 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.tQm 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.tQm 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:32:35.701 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=7bf799a49caee77019ceb2577ddbb38bfee707fa8d419012a7c6e85a68ef57cf 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.1Y2 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 7bf799a49caee77019ceb2577ddbb38bfee707fa8d419012a7c6e85a68ef57cf 3 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 7bf799a49caee77019ceb2577ddbb38bfee707fa8d419012a7c6e85a68ef57cf 3 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=7bf799a49caee77019ceb2577ddbb38bfee707fa8d419012a7c6e85a68ef57cf 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:32:35.702 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.1Y2 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.1Y2 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1Y2 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2159140 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2159140 ']' 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.O1I 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fzW ]] 00:32:35.963 
08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fzW 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.a6m 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.apq ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.apq 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hBc 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.x2Q ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x2Q 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.eTE 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.963 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.tQm ]] 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.tQm 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:35.964 08:29:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1Y2 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.964 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:36.225 08:29:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:36.225 08:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:40.519 Waiting for block devices as requested 00:32:40.519 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:40.519 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:40.779 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:40.779 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:40.779 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:41.039 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:41.039 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:41.039 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:41.039 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:41.299 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]]
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:32:42.241 No valid GPT data, bailing
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]]
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420
00:32:42.241 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:32:42.242
00:32:42.242 Discovery Log Number of Records 2, Generation counter 2
00:32:42.242 =====Discovery Log Entry 0======
00:32:42.242 trtype: tcp
00:32:42.242 adrfam: ipv4
00:32:42.242 subtype: current discovery subsystem
00:32:42.242 treq: not specified, sq flow control disable supported
00:32:42.242 portid: 1
00:32:42.242 trsvcid: 4420
00:32:42.242 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:42.242 traddr: 10.0.0.1
00:32:42.242 eflags: none
00:32:42.242 sectype: none
00:32:42.242 =====Discovery Log Entry 1======
00:32:42.242 trtype: tcp
00:32:42.242 adrfam: ipv4
00:32:42.242 subtype: nvme subsystem
00:32:42.242 treq: not specified, sq flow control disable supported
00:32:42.242 portid: 1
00:32:42.242 trsvcid: 4420
00:32:42.242 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:42.242 traddr: 10.0.0.1
00:32:42.242 eflags: none
00:32:42.242 sectype: none
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.242 08:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.503 nvme0n1
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:42.503 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT:
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=:
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT:
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=:
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:42.504 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.764 nvme0n1
00:32:42.764 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:42.765 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.026 nvme0n1
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.026 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.287 nvme0n1
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==:
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt:
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==:
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt:
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.287 08:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.549 nvme0n1
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=:
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=:
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.549 nvme0n1
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.549 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 --
# key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.810 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.071 nvme0n1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.071 08:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.071 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.333 nvme0n1 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.333 08:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.595 nvme0n1 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.595 08:29:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.595 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.856 nvme0n1 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.856 08:29:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.856 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.857 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.118 nvme0n1 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:45.118 08:29:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.118 08:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.379 nvme0n1 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.379 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:45.641 08:29:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.641 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.902 nvme0n1 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.902 
08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:45.902 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:32:45.903 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:45.903 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:45.903 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.903 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.903 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.163 nvme0n1 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.163 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.424 
08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.424 
08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.424 08:29:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.424 08:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.684 nvme0n1 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.684 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.685 08:29:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.685 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.946 nvme0n1 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.946 08:29:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.946 08:29:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.946 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:47.206 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:47.206 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:47.206 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:47.206 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.207 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.207 08:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.466 nvme0n1 00:32:47.466 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.467 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.727 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.989 nvme0n1 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.989 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:48.249 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:48.250 08:29:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.250 08:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.820 nvme0n1 00:32:48.820 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.821 08:29:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.821 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.393 nvme0n1 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.393 08:29:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.393 08:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.965 nvme0n1 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.965 08:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 
00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.965 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.536 nvme0n1 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.537 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.797 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.368 nvme0n1 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.368 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:51.628 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:51.628 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.628 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.199 nvme0n1 00:32:52.199 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.460 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.461 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:52.461 08:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.461 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.402 nvme0n1 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:53.402 08:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:53.402 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.403 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.975 nvme0n1 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.975 08:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.975 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.236 nvme0n1 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.236 08:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.236 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.237 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.497 nvme0n1 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.497 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:54.497 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.497 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.757 nvme0n1 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.757 
08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.757 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 
00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.758 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.019 nvme0n1 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.019 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.281 nvme0n1 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.281 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.281 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:55.281 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.542 nvme0n1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.542 
08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.542 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.804 nvme0n1 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.804 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.805 08:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.805 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.066 nvme0n1 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.066 08:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.066 08:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:56.066 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:56.067 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:56.067 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:56.067 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.067 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.067 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.328 nvme0n1 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.328 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:56.329 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:56.329 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.589 nvme0n1 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.589 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.590 08:30:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.590 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.850 nvme0n1 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.850 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha384 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.111 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.372 nvme0n1 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.372 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.372 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.633 nvme0n1 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.633 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.894 08:30:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.894 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.155 nvme0n1 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.155 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 
ffdhe4096 4 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:58.156 08:30:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.156 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 nvme0n1 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.416 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.417 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:58.678 
08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.678 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.939 nvme0n1 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.939 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.940 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.200 08:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.200 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.200 08:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.201 08:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.462 nvme0n1 00:32:59.462 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.462 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.462 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.462 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.463 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.463 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
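The trace repeats one cycle per key: `nvmet_auth_set_key`, `bdev_nvme_set_options` with the digest/dhgroup under test, `bdev_nvme_attach_controller`, a `bdev_nvme_get_controllers` check, then `bdev_nvme_detach_controller`. The detail worth noting is the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` idiom at `host/auth.sh@58`, which emits the `--dhchap-ctrlr-key` option only when a controller key exists for that keyid (keyid 4 has none, as the empty `ckey=` lines show). A minimal offline sketch of that expansion, echoing the RPC command line instead of invoking `rpc_cmd` (the `build_attach_cmd` helper is hypothetical; the option names, NQNs, and address are copied from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the per-key attach logic visible in the trace.
# keyid 4 deliberately has no controller key, matching the log.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=ckey3 [4]=)

build_attach_cmd() {
    local keyid=$1
    # Same idiom as host/auth.sh@58: expand to the two-word option
    # only when ckeys[keyid] is non-empty; otherwise an empty array.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}

build_attach_cmd 2   # includes --dhchap-ctrlr-key ckey2
build_attach_cmd 4   # no controller key, option omitted
```

This mirrors why the trace shows `--dhchap-ctrlr-key ckey2` for keyid 2 but only `--dhchap-key key4` for keyid 4.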
00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.723 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.724 08:30:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.724 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.295 nvme0n1 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.295 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe6144 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:00.296 08:30:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.296 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.868 nvme0n1 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.868 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:00.868 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.869 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:00.869 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.869 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.130 nvme0n1 00:33:01.130 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.130 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.130 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.130 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.130 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for 
dhgroup in "${dhgroups[@]}" 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.391 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:01.392 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.392 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.335 nvme0n1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.335 08:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:02.335 08:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.335 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.906 nvme0n1 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:02.906 
08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.906 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:03.166 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:03.167 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.167 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.167 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.740 nvme0n1 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe8192 3 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.740 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:04.001 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:04.002 08:30:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.002 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.574 nvme0n1 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.574 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.834 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.835 
08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 
00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.835 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.406 nvme0n1 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.406 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.667 08:30:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:05.667 08:30:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.667 nvme0n1 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.667 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ 
-z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 nvme0n1 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.929 08:30:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.929 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:06.191 
08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.191 nvme0n1 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:06.191 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.453 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.453 nvme0n1 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.453 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.453 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.454 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.715 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.715 nvme0n1 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.715 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.976 nvme0n1 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.976 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.977 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.977 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 nvme0n1 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.237 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.497 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.497 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.497 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.497 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:07.498 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:07.498 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 nvme0n1 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.498 
08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 
00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.759 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.020 nvme0n1 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.020 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.021 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.282 nvme0n1 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.282 08:30:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.282 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.283 08:30:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:08.283 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.545 nvme0n1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.545 
08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.545 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.806 nvme0n1 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.806 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.067 08:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.067 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 nvme0n1 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.328 08:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.328 08:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.328 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.329 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.589 nvme0n1 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.589 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:09.850 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.110 nvme0n1 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.110 08:30:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:10.110 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.111 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.684 nvme0n1 00:33:10.684 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.684 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.684 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha512 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.685 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.257 nvme0n1 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.257 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.258 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.831 nvme0n1 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.831 08:30:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.831 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.403 nvme0n1 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe6144 4 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.403 08:30:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:12.403 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.404 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.665 nvme0n1 00:33:12.665 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.665 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.665 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.665 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.665 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.927 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg0M2Y1MDEyODg1MWM2OWY1NmQ4YTJlZjk1YWE1MDirqZDT: 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIyYTZlYjgxMWQ5NWM3YjVkODdlN2YxMmMxYmFmY2I2MTVmNTM3NmM1OWRjNTI4MDViMzZmMzVlNjkzMGI2MNuDSbM=: 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:12.928 
08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.928 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.870 nvme0n1 00:33:13.870 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.870 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.870 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.870 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.871 08:30:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.871 08:30:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.871 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.443 nvme0n1 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft: 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]] 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.443 08:30:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:14.443 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:14.702 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:14.702 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:14.702 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:14.702 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.703 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.273 nvme0n1 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.273 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.534 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWI5NjYyZTJiOTM3MWU4MzM5ZmIzNWRlMDhlODJlNDE5ODNlMDI0MDg3MjQwNWEwQYsJHQ==: 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGUwMTAyYWJhZWU0YjU2MTNlOWFmOTZlOTQ3MDRkM2URx1Kt: 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:15.534 08:30:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.534 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.105 nvme0n1 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.105 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.366 08:30:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JmNzk5YTQ5Y2FlZTc3MDE5Y2ViMjU3N2RkYmIzOGJmZWU3MDdmYThkNDE5MDEyYTdjNmU4NWE2OGVmNTdjZuaEWds=: 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.366 08:30:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:16.366 08:30:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.366 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.937 nvme0n1 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.937 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe2048 1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==: 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.203 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.203 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.203 request: 00:33:17.203 { 00:33:17.203 "name": "nvme0", 00:33:17.203 "trtype": "tcp", 00:33:17.203 "traddr": "10.0.0.1", 00:33:17.203 "adrfam": "ipv4", 00:33:17.203 "trsvcid": "4420", 00:33:17.203 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:17.203 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:17.203 "prchk_reftag": false, 00:33:17.203 "prchk_guard": false, 00:33:17.203 "hdgst": false, 00:33:17.203 "ddgst": false, 00:33:17.203 "allow_unrecognized_csi": false, 00:33:17.203 "method": "bdev_nvme_attach_controller", 00:33:17.203 "req_id": 1 00:33:17.203 } 00:33:17.203 Got JSON-RPC error response 00:33:17.203 response: 00:33:17.203 { 00:33:17.203 "code": -5, 00:33:17.203 "message": "Input/output error" 00:33:17.203 } 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.203 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 
00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.203 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.510 request:
00:33:17.510 {
00:33:17.510 "name": "nvme0",
00:33:17.510 "trtype": "tcp",
00:33:17.510 "traddr": "10.0.0.1",
00:33:17.510 "adrfam": "ipv4",
00:33:17.510 "trsvcid": "4420",
00:33:17.510 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:33:17.510 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:33:17.510 "prchk_reftag": false,
00:33:17.510 "prchk_guard": false,
00:33:17.510 "hdgst": false,
00:33:17.510 "ddgst": false,
00:33:17.510 "dhchap_key": "key2",
00:33:17.510 "allow_unrecognized_csi": false,
00:33:17.510 "method": "bdev_nvme_attach_controller",
00:33:17.510 "req_id": 1
00:33:17.510 }
00:33:17.510 Got JSON-RPC error response
00:33:17.510 response:
00:33:17.510 {
00:33:17.510 "code": -5,
00:33:17.510 "message": "Input/output error"
00:33:17.510 }
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:17.510 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.510 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.511 request:
00:33:17.511 {
00:33:17.511 "name": "nvme0",
00:33:17.511 "trtype": "tcp",
00:33:17.511 "traddr": "10.0.0.1",
00:33:17.511 "adrfam": "ipv4",
00:33:17.511 "trsvcid": "4420",
00:33:17.511 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:33:17.511 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:33:17.511 "prchk_reftag": false,
00:33:17.511 "prchk_guard": false,
00:33:17.511 "hdgst": false,
00:33:17.511 "ddgst": false,
00:33:17.511 "dhchap_key": "key1",
00:33:17.511 "dhchap_ctrlr_key": "ckey2",
00:33:17.511 "allow_unrecognized_csi": false,
00:33:17.511 "method": "bdev_nvme_attach_controller",
00:33:17.511 "req_id": 1
00:33:17.511 }
00:33:17.511 Got JSON-RPC error response
00:33:17.511 response:
00:33:17.511 {
00:33:17.511 "code": -5,
00:33:17.511 "message": "Input/output error"
00:33:17.511 }
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.511 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.822 nvme0n1
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]]
00:33:17.822 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.823 request:
00:33:17.823 {
00:33:17.823 "name": "nvme0",
00:33:17.823 "dhchap_key": "key1",
00:33:17.823 "dhchap_ctrlr_key": "ckey2",
00:33:17.823 "method": "bdev_nvme_set_keys",
00:33:17.823 "req_id": 1
00:33:17.823 }
00:33:17.823 Got JSON-RPC error response
00:33:17.823 response:
00:33:17.823 {
00:33:17.823 "code": -13,
00:33:17.823 "message": "Permission denied"
00:33:17.823 }
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:33:17.823 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:33:19.208 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzA4NmY3ZTZiZDUzNzNlYWU1OWIwOWVlOWI2ODhhYzcwMzEwZTNjOGI4MjdhOGM4ttVTMA==:
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==: ]]
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTdmM2FjNDJkODA3Yzc3NzNiMmUyN2U4NjNjNDUxN2Q5NGE1YTVlOGJkOGUwYzYyQOnAkg==:
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:20.153 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.154 nvme0n1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDZlMTQ0ZTNmMDJhZDAxMDkwYzU1M2E4ZTQ5YWJlYzZ3zpft:
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP: ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTIyYTQ0MjZkYTJkODRkZjhhNzQxOTNmZmVkMGJjOWZ59TRP:
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.154 request:
00:33:20.154 {
00:33:20.154 "name": "nvme0",
00:33:20.154 "dhchap_key": "key2",
00:33:20.154 "dhchap_ctrlr_key": "ckey1",
00:33:20.154 "method": "bdev_nvme_set_keys",
00:33:20.154 "req_id": 1
00:33:20.154 }
00:33:20.154 Got JSON-RPC error response
00:33:20.154 response:
00:33:20.154 {
00:33:20.154 "code": -13,
00:33:20.154 "message": "Permission denied"
00:33:20.154 }
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.154 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:20.415 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:33:20.415 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20}
00:33:21.357 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 2159140 ']'
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 2159140
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2159140 ']'
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2159140
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:21.357 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159140
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159140'
killing process with pid 2159140
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2159140
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2159140
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:33:21.618 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:33:23.534 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=()
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*)
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet
00:33:23.796 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:28.004 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:33:28.004 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:33:28.265 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.O1I /tmp/spdk.key-null.a6m /tmp/spdk.key-sha256.hBc /tmp/spdk.key-sha384.eTE /tmp/spdk.key-sha512.1Y2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:33:28.265 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:32.476 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:33:32.476 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:33:32.476 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:33:32.476 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:33:32.476 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:33:32.477 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:33:32.477 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:33:32.477
00:33:32.477 real 1m7.017s
00:33:32.477 user 0m59.868s
00:33:32.477 sys 0m18.245s
00:33:32.477 08:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:32.477 08:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:32.477 ************************************
00:33:32.477 END TEST nvmf_auth_host
00:33:32.477 ************************************
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:32.738 ************************************
00:33:32.738 START TEST nvmf_bdevperf
00:33:32.738 ************************************
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:32.738 * Looking for test storage...
00:33:32.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:33:32.738 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:32.739 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:33.001 --rc genhtml_branch_coverage=1
00:33:33.001 --rc genhtml_function_coverage=1
00:33:33.001 --rc genhtml_legend=1
00:33:33.001 --rc geninfo_all_blocks=1
00:33:33.001 --rc geninfo_unexecuted_blocks=1
00:33:33.001
00:33:33.001 '
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:33.001 --rc genhtml_branch_coverage=1
00:33:33.001 --rc genhtml_function_coverage=1
00:33:33.001 --rc genhtml_legend=1
00:33:33.001 --rc geninfo_all_blocks=1
00:33:33.001 --rc geninfo_unexecuted_blocks=1
00:33:33.001
00:33:33.001 '
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:33.001 --rc genhtml_branch_coverage=1
00:33:33.001 --rc genhtml_function_coverage=1
00:33:33.001 --rc genhtml_legend=1
00:33:33.001 --rc geninfo_all_blocks=1
00:33:33.001 --rc geninfo_unexecuted_blocks=1
00:33:33.001
00:33:33.001 '
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:33.001 --rc genhtml_branch_coverage=1
00:33:33.001 --rc genhtml_function_coverage=1
00:33:33.001 --rc genhtml_legend=1
00:33:33.001 --rc geninfo_all_blocks=1
00:33:33.001 --rc geninfo_unexecuted_blocks=1
00:33:33.001
00:33:33.001 '
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:33:33.001 08:30:37
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:33.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:33.001 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:33:33.002 08:30:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:33:41.150 08:30:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.150 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 
00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:41.151 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:41.151 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 
-- # [[ e810 == e810 ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:41.151 Found net devices under 0000:31:00.0: cvl_0_0 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.151 
08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:41.151 Found net devices under 0000:31:00.1: cvl_0_1 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@247 -- # create_target_ns 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:41.151 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- 
# initiator=cvl_0_0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:41.152 10.0.0.1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:41.152 10.0.0.2 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:41.152 
08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( 
_dev < max + no )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:41.152 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:41.153 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:41.153 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:41.153 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:41.153 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:41.153 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:41.415 08:30:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:41.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:33:41.415 00:33:41.415 --- 10.0.0.1 ping statistics --- 00:33:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.415 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 
-- # [[ -n cvl_0_1 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:41.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:41.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:33:41.415 00:33:41.415 --- 10.0.0.2 ping statistics --- 00:33:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.415 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # 
echo cvl_0_0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:41.415 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.416 08:30:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.416 08:30:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.416 
08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:41.416 08:30:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=2178762 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 2178762 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2178762 ']' 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
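The `ping_ip` expansions traced earlier in this run (nvmf/setup.sh@80-83) can be sketched as the following helper. This is a reconstruction from the xtrace output, not the verbatim source: the optional namespace argument is the *name* of an array variable holding the wrapper command (e.g. `NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)`), bound via a bash nameref as the `local -n ns=NVMF_TARGET_NS_CMD` line shows.

```shell
#!/usr/bin/env bash
# Reconstructed sketch of ping_ip from the trace above: ping an address once,
# optionally prefixed by a netns wrapper command passed by variable name.
ping_ip() {
	local ip=$1 in_ns=$2 count=${3:-1}
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns          # nameref to the wrapper array, per the trace
	fi
	# With a namespace this evals e.g. 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1';
	# without one it degenerates to a plain ' ping -c 1 <ip>', exactly as traced.
	eval "${ns[*]:-} ping -c $count $ip"
}
```

In the trace, `ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD` expands to `ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1`, while the second call (`ping_ip 10.0.0.2` with no namespace) runs a bare `ping -c 1 10.0.0.2` from the initiator side.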
00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.416 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:41.416 [2024-11-20 08:30:46.094110] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:33:41.416 [2024-11-20 08:30:46.094179] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.677 [2024-11-20 08:30:46.203675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:41.677 [2024-11-20 08:30:46.255445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.677 [2024-11-20 08:30:46.255499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.677 [2024-11-20 08:30:46.255508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.677 [2024-11-20 08:30:46.255515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.677 [2024-11-20 08:30:46.255521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
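The `get_net_dev`/`get_ip_address` expansions in the trace resolve a logical device name (`target0`, `initiator1`, ...) to a physical interface and then read its IP from the interface alias. A minimal sketch follows, with two loudly labeled assumptions: the logical-to-physical mapping is modeled as a hypothetical `NET_DEV_MAP` array (the trace only shows the lookup *result*, `cvl_0_1`, not the mechanism), `SYSFS_NET` is an added knob so the `ifalias` read can be redirected for testing, and the namespace handling seen in the trace is omitted.

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's device/IP resolution. NET_DEV_MAP and SYSFS_NET are
# illustration-only assumptions; the real script resolves devices differently
# and can run the read inside a netns (omitted here).
SYSFS_NET=${SYSFS_NET:-/sys/class/net}
declare -A NET_DEV_MAP=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

get_net_dev() {
	local dev=$1
	[[ -n $dev && -n ${NET_DEV_MAP[$dev]:-} ]] || return 1   # mirrors @100's return 1
	echo "${NET_DEV_MAP[$dev]}"
}

get_ip_address() {
	local dev=$1 ip
	dev=$(get_net_dev "$dev") || return 0    # unknown device: empty result, rc 0 (@160)
	ip=$(cat "$SYSFS_NET/$dev/ifalias")      # the test IPs are stored in ifalias (@163)
	[[ -n $ip ]] && echo "$ip"
}
```

This reproduces the traced behavior: `target0` resolves to `cvl_0_1` whose `ifalias` holds `10.0.0.2`, while `initiator1`/`target1` have no device, so `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` end up empty.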
00:33:41.677 [2024-11-20 08:30:46.257645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:41.677 [2024-11-20 08:30:46.257811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.677 [2024-11-20 08:30:46.257810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.249 [2024-11-20 08:30:46.953087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.249 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.510 Malloc0 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.510 08:30:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:42.510 [2024-11-20 08:30:47.009370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:33:42.510 
08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:42.510 { 00:33:42.510 "params": { 00:33:42.510 "name": "Nvme$subsystem", 00:33:42.510 "trtype": "$TEST_TRANSPORT", 00:33:42.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.510 "adrfam": "ipv4", 00:33:42.510 "trsvcid": "$NVMF_PORT", 00:33:42.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.510 "hdgst": ${hdgst:-false}, 00:33:42.510 "ddgst": ${ddgst:-false} 00:33:42.510 }, 00:33:42.510 "method": "bdev_nvme_attach_controller" 00:33:42.510 } 00:33:42.510 EOF 00:33:42.510 )") 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:33:42.510 08:30:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:42.510 "params": { 00:33:42.510 "name": "Nvme1", 00:33:42.510 "trtype": "tcp", 00:33:42.510 "traddr": "10.0.0.2", 00:33:42.510 "adrfam": "ipv4", 00:33:42.510 "trsvcid": "4420", 00:33:42.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.510 "hdgst": false, 00:33:42.510 "ddgst": false 00:33:42.510 }, 00:33:42.510 "method": "bdev_nvme_attach_controller" 00:33:42.510 }' 00:33:42.510 [2024-11-20 08:30:47.063468] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
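The `gen_nvmf_target_json` expansion just traced (nvmf/common.sh@372-398) builds one attach-controller JSON block per subsystem with a here-document, then comma-joins them via `IFS=,`. Below is a simplified sketch of only that assembly step, under a hypothetical name (`gen_attach_params`) so as not to claim it is the real helper: the real function also wraps the result into the full bdevperf config and pretty-prints it with `jq .`, and the environment defaults used here are simply the values visible in the trace.

```shell
#!/usr/bin/env bash
# Simplified sketch of the heredoc-per-subsystem pattern from the trace.
# Defaults mirror the traced values; the real helper adds jq and an outer wrapper.
gen_attach_params() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do           # default: a single subsystem "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"             # comma-join when several subsystems given
}
```

With no arguments this yields exactly one `Nvme1`/`cnode1` block, matching the single-object `printf '%s\n' '{ ... }'` seen in the trace above.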
00:33:42.510 [2024-11-20 08:30:47.063526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178842 ] 00:33:42.510 [2024-11-20 08:30:47.141879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.510 [2024-11-20 08:30:47.178479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.772 Running I/O for 1 seconds... 00:33:44.159 8791.00 IOPS, 34.34 MiB/s 00:33:44.159 Latency(us) 00:33:44.159 [2024-11-20T07:30:48.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.159 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:44.159 Verification LBA range: start 0x0 length 0x4000 00:33:44.159 Nvme1n1 : 1.01 8871.18 34.65 0.00 0.00 14357.40 1078.61 12178.77 00:33:44.159 [2024-11-20T07:30:48.888Z] =================================================================================================================== 00:33:44.159 [2024-11-20T07:30:48.888Z] Total : 8871.18 34.65 0.00 0.00 14357.40 1078.61 12178.77 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2179140 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:44.159 { 00:33:44.159 "params": { 00:33:44.159 "name": "Nvme$subsystem", 00:33:44.159 "trtype": "$TEST_TRANSPORT", 00:33:44.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:44.159 "adrfam": "ipv4", 00:33:44.159 "trsvcid": "$NVMF_PORT", 00:33:44.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:44.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:44.159 "hdgst": ${hdgst:-false}, 00:33:44.159 "ddgst": ${ddgst:-false} 00:33:44.159 }, 00:33:44.159 "method": "bdev_nvme_attach_controller" 00:33:44.159 } 00:33:44.159 EOF 00:33:44.159 )") 00:33:44.159 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:33:44.160 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:33:44.160 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:33:44.160 08:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:44.160 "params": { 00:33:44.160 "name": "Nvme1", 00:33:44.160 "trtype": "tcp", 00:33:44.160 "traddr": "10.0.0.2", 00:33:44.160 "adrfam": "ipv4", 00:33:44.160 "trsvcid": "4420", 00:33:44.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:44.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:44.160 "hdgst": false, 00:33:44.160 "ddgst": false 00:33:44.160 }, 00:33:44.160 "method": "bdev_nvme_attach_controller" 00:33:44.160 }' 00:33:44.160 [2024-11-20 08:30:48.621890] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
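As a quick cross-check of the bdevperf figures reported above: with 4 KiB I/Os (`-o 4096`), MiB/s is just IOPS × 4096 / 2^20, which reproduces both the interim sample (8791.00 IOPS → 34.34 MiB/s) and the final 1 s verify result (8871.18 IOPS → 34.65 MiB/s):

```shell
# Verify that the reported MiB/s matches IOPS * 4 KiB for the first bdevperf run.
awk 'BEGIN {
    printf "%.2f\n", 8791.00 * 4096 / (1024 * 1024)   # interim sample: 34.34 MiB/s
    printf "%.2f\n", 8871.18 * 4096 / (1024 * 1024)   # 1 s verify run: 34.65 MiB/s
}'
```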
00:33:44.160 [2024-11-20 08:30:48.621947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179140 ] 00:33:44.160 [2024-11-20 08:30:48.700461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.160 [2024-11-20 08:30:48.734813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.420 Running I/O for 15 seconds... 00:33:46.310 10869.00 IOPS, 42.46 MiB/s [2024-11-20T07:30:51.614Z] 11194.00 IOPS, 43.73 MiB/s [2024-11-20T07:30:51.614Z] 08:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2178762 00:33:46.885 08:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:46.885 [2024-11-20 08:30:51.587170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587273] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 
08:30:51.587509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.885 [2024-11-20 08:30:51.587578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.885 [2024-11-20 08:30:51.587587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 
08:30:51.587798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.587992] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.587999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 
08:30:51.588186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.886 [2024-11-20 08:30:51.588254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.886 [2024-11-20 08:30:51.588264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 
08:30:51.588474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 
08:30:51.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.887 [2024-11-20 08:30:51.588815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.887 [2024-11-20 08:30:51.588824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.588987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.588998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 
08:30:51.589056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.888 [2024-11-20 08:30:51.589328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.888 [2024-11-20 08:30:51.589337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.889 [2024-11-20 
08:30:51.589344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.889 [2024-11-20 08:30:51.589361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.889 [2024-11-20 08:30:51.589378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.889 [2024-11-20 08:30:51.589395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.889 [2024-11-20 08:30:51.589412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.889 [2024-11-20 08:30:51.589430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.889 [2024-11-20 08:30:51.589440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.889 [2024-11-20 08:30:51.589447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.589457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.889 [2024-11-20 08:30:51.589464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.589473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.889 [2024-11-20 08:30:51.589480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.589490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.889 [2024-11-20 08:30:51.589497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.589507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.889 [2024-11-20 08:30:51.589514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.589523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c40de0 is same with the state(6) to be set
00:33:46.889 [2024-11-20 08:30:51.589533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:46.889 [2024-11-20 08:30:51.589539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.889 [2024-11-20 08:30:51.589547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100944 len:8 PRP1 0x0 PRP2 0x0
00:33:46.889 [2024-11-20 08:30:51.589554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.889 [2024-11-20 08:30:51.593188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:46.889 [2024-11-20 08:30:51.593242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:46.889 [2024-11-20 08:30:51.594063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.889 [2024-11-20 08:30:51.594101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:46.889 [2024-11-20 08:30:51.594114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:46.889 [2024-11-20 08:30:51.594353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:46.889 [2024-11-20 08:30:51.594575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:46.889 [2024-11-20 08:30:51.594584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:46.889 [2024-11-20 08:30:51.594593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:46.889 [2024-11-20 08:30:51.594603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:46.889 [2024-11-20 08:30:51.607387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:46.889 [2024-11-20 08:30:51.607997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.889 [2024-11-20 08:30:51.608036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:46.889 [2024-11-20 08:30:51.608049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.151 [2024-11-20 08:30:51.608288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.151 [2024-11-20 08:30:51.608509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.151 [2024-11-20 08:30:51.608518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.151 [2024-11-20 08:30:51.608526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.151 [2024-11-20 08:30:51.608534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.151 [2024-11-20 08:30:51.621138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.151 [2024-11-20 08:30:51.621784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.151 [2024-11-20 08:30:51.621822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.151 [2024-11-20 08:30:51.621834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.151 [2024-11-20 08:30:51.622078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.151 [2024-11-20 08:30:51.622299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.151 [2024-11-20 08:30:51.622307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.151 [2024-11-20 08:30:51.622315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.151 [2024-11-20 08:30:51.622323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.151 [2024-11-20 08:30:51.634874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.151 [2024-11-20 08:30:51.635513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.151 [2024-11-20 08:30:51.635550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.151 [2024-11-20 08:30:51.635562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.151 [2024-11-20 08:30:51.635797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.636026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.636036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.636044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.636052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.648795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.649471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.649508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.649529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.649764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.649992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.650002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.650010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.650018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.662561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.663252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.663290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.663302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.663537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.663757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.663765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.663773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.663781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.676385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.676958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.676996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.677008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.677247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.677467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.677475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.677483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.677491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.690248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.690938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.690976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.690989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.691227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.691451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.691468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.691476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.691484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.704035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.704687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.704724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.704735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.704978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.705199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.705209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.705217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.705225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.717769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.718390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.718428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.718439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.718674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.718903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.718912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.718920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.718928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.731691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.732332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.732369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.732380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.732615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.732835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.732844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.732856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.732873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.745620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.746187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.746225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.746238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.746474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.746695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.746705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.746713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.746720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.759470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.759974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.760012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.760024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.152 [2024-11-20 08:30:51.760262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.152 [2024-11-20 08:30:51.760481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.152 [2024-11-20 08:30:51.760490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.152 [2024-11-20 08:30:51.760498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.152 [2024-11-20 08:30:51.760505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.152 [2024-11-20 08:30:51.773260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.152 [2024-11-20 08:30:51.773793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.152 [2024-11-20 08:30:51.773831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.152 [2024-11-20 08:30:51.773842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.774085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.774307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.774315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.774323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.774331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.787084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.787738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.787776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.787787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.788031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.788253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.788261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.788269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.788277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.800815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.801454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.801491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.801502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.801737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.801966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.801976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.801984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.801991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.814737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.815307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.815344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.815355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.815590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.815810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.815819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.815827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.815835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.828638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.829257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.829295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.829310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.829545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.829764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.829773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.829781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.829789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.842543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.843059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.843096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.843107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.843341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.843560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.843569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.843577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.843585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.856354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.856833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.856852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.856867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.857083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.857299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.857308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.857315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.857322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.153 [2024-11-20 08:30:51.870268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.153 [2024-11-20 08:30:51.870794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.153 [2024-11-20 08:30:51.870811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.153 [2024-11-20 08:30:51.870818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.153 [2024-11-20 08:30:51.871040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.153 [2024-11-20 08:30:51.871260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.153 [2024-11-20 08:30:51.871269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.153 [2024-11-20 08:30:51.871277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.153 [2024-11-20 08:30:51.871285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.884019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.884549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.884565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.884573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.884788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.885008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.885017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.885024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.885031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.897762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.898368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.898405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.898418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.898653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.898882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.898893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.898901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.898909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.911655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.912316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.912353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.912364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.912599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.912819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.912828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.912840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.912849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.925411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.926139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.926177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.926188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.926423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.926643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.926651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.926659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.926667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.939306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.939961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.940000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.940012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.940249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.940469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.940478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.940486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.940494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.953045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.953713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.953751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.953762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.954006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.954226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.954235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.954243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.954251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.966801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.967478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.967516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.967526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.967761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.967990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.968000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.968009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.416 [2024-11-20 08:30:51.968017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.416 [2024-11-20 08:30:51.980558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.416 [2024-11-20 08:30:51.981205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.416 [2024-11-20 08:30:51.981242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.416 [2024-11-20 08:30:51.981253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.416 [2024-11-20 08:30:51.981488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.416 [2024-11-20 08:30:51.981708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.416 [2024-11-20 08:30:51.981717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.416 [2024-11-20 08:30:51.981725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:51.981733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:51.994487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:51.995161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:51.995199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:51.995210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:51.995445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:51.995664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:51.995673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:51.995681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:51.995689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.008237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.008924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.008962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.008978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.009213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.009439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.009449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.009457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.009465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 9627.33 IOPS, 37.61 MiB/s [2024-11-20T07:30:52.146Z] [2024-11-20 08:30:52.022039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.022670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.022709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.022720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.022962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.023183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.023192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.023200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.023208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.035791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.036476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.036514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.036525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.036759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.036988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.036998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.037006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.037014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.049552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.050226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.050264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.050276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.050511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.050735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.050744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.050752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.050760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.063312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.063984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.064022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.064035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.064273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.064493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.064501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.064509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.064517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.077064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.077742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.077780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.077791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.078033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.078254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.078263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.078270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.078278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.090816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.091428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.091466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.091477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.091711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.091940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.091951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.091963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.091971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.104716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.105349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.105387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.105399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.105636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.105855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.417 [2024-11-20 08:30:52.105873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.417 [2024-11-20 08:30:52.105881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.417 [2024-11-20 08:30:52.105889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.417 [2024-11-20 08:30:52.118642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.417 [2024-11-20 08:30:52.119273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.417 [2024-11-20 08:30:52.119311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.417 [2024-11-20 08:30:52.119322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.417 [2024-11-20 08:30:52.119557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.417 [2024-11-20 08:30:52.119777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.418 [2024-11-20 08:30:52.119786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.418 [2024-11-20 08:30:52.119794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.418 [2024-11-20 08:30:52.119802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.418 [2024-11-20 08:30:52.132578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.418 [2024-11-20 08:30:52.133221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.418 [2024-11-20 08:30:52.133259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.418 [2024-11-20 08:30:52.133270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.418 [2024-11-20 08:30:52.133505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.418 [2024-11-20 08:30:52.133724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.418 [2024-11-20 08:30:52.133733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.418 [2024-11-20 08:30:52.133741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.418 [2024-11-20 08:30:52.133749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.146511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.147183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.147221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.147233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.147467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.147687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.147696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.147703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.147711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.160257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.160921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.160959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.160971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.161210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.161429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.161439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.161447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.161455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.174006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.174675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.174712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.174723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.174966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.175187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.175195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.175203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.175211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.187750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.188440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.188478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.188493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.188728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.188956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.188966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.188974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.188981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.201522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.202174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.202212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.202223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.202458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.202677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.202686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.202694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.202702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.215257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.215839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.215859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.215872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.216088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.216304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.216312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.216319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.216326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.229079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.682 [2024-11-20 08:30:52.229653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.682 [2024-11-20 08:30:52.229670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.682 [2024-11-20 08:30:52.229677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.682 [2024-11-20 08:30:52.229897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.682 [2024-11-20 08:30:52.230117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.682 [2024-11-20 08:30:52.230126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.682 [2024-11-20 08:30:52.230133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.682 [2024-11-20 08:30:52.230140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.682 [2024-11-20 08:30:52.242917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.682 [2024-11-20 08:30:52.243445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.682 [2024-11-20 08:30:52.243462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.682 [2024-11-20 08:30:52.243470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.682 [2024-11-20 08:30:52.243685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.682 [2024-11-20 08:30:52.243907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.682 [2024-11-20 08:30:52.243916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.682 [2024-11-20 08:30:52.243923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.682 [2024-11-20 08:30:52.243931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.256675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.257280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.257317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.257328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.257564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.257784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.257792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.257800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.257808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.270564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.271152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.271173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.271181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.271397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.271613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.271622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.271634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.271641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.284385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.284843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.284860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.284874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.285089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.285305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.285314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.285321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.285327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.298264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.298788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.298804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.298812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.299032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.299249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.299257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.299264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.299271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.312054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.312747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.312785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.312796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.313038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.313260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.313268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.313277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.313285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.325849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.326447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.326485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.326496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.326731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.326958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.326968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.326976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.326983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.339747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.340293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.340313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.340321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.340537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.340754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.340763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.340770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.340776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.353524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.354001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.354040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.354052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.354290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.354510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.354519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.354527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.354534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.367296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.683 [2024-11-20 08:30:52.367947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.683 [2024-11-20 08:30:52.367984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.683 [2024-11-20 08:30:52.368000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.683 [2024-11-20 08:30:52.368235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.683 [2024-11-20 08:30:52.368454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.683 [2024-11-20 08:30:52.368463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.683 [2024-11-20 08:30:52.368472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.683 [2024-11-20 08:30:52.368479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.683 [2024-11-20 08:30:52.381033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.684 [2024-11-20 08:30:52.381655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.684 [2024-11-20 08:30:52.381693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.684 [2024-11-20 08:30:52.381704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.684 [2024-11-20 08:30:52.381947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.684 [2024-11-20 08:30:52.382167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.684 [2024-11-20 08:30:52.382176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.684 [2024-11-20 08:30:52.382184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.684 [2024-11-20 08:30:52.382192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.684 [2024-11-20 08:30:52.394944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.684 [2024-11-20 08:30:52.395521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.684 [2024-11-20 08:30:52.395541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.684 [2024-11-20 08:30:52.395549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.684 [2024-11-20 08:30:52.395765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.684 [2024-11-20 08:30:52.395986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.684 [2024-11-20 08:30:52.395995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.684 [2024-11-20 08:30:52.396003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.684 [2024-11-20 08:30:52.396010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.408757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.409284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.409302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.947 [2024-11-20 08:30:52.409310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.947 [2024-11-20 08:30:52.409526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.947 [2024-11-20 08:30:52.409746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.947 [2024-11-20 08:30:52.409755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.947 [2024-11-20 08:30:52.409762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.947 [2024-11-20 08:30:52.409768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.422523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.423055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.423072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.947 [2024-11-20 08:30:52.423080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.947 [2024-11-20 08:30:52.423296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.947 [2024-11-20 08:30:52.423511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.947 [2024-11-20 08:30:52.423520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.947 [2024-11-20 08:30:52.423528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.947 [2024-11-20 08:30:52.423534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.436289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.436933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.436970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.947 [2024-11-20 08:30:52.436982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.947 [2024-11-20 08:30:52.437216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.947 [2024-11-20 08:30:52.437436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.947 [2024-11-20 08:30:52.437445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.947 [2024-11-20 08:30:52.437453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.947 [2024-11-20 08:30:52.437462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.450047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.450718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.450755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.947 [2024-11-20 08:30:52.450767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.947 [2024-11-20 08:30:52.451008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.947 [2024-11-20 08:30:52.451229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.947 [2024-11-20 08:30:52.451237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.947 [2024-11-20 08:30:52.451250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.947 [2024-11-20 08:30:52.451257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.463798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.464336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.464357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.947 [2024-11-20 08:30:52.464366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.947 [2024-11-20 08:30:52.464581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.947 [2024-11-20 08:30:52.464798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.947 [2024-11-20 08:30:52.464807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.947 [2024-11-20 08:30:52.464814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.947 [2024-11-20 08:30:52.464821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.947 [2024-11-20 08:30:52.477566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.947 [2024-11-20 08:30:52.478094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.947 [2024-11-20 08:30:52.478111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.478118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.478333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.478552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.478561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.478568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.478574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.491320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.491839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.491855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.491868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.492083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.492299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.492308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.492315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.492322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.505069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.505596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.505612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.505620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.505834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.506055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.506071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.506078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.506085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.518827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.519398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.519415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.519422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.519637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.519853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.519861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.519873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.519880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.532644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.533263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.533301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.533312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.533547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.533766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.533775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.533783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.533791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.546556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.547242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.547280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.547296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.547530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.547750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.547759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.547767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.547775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.560332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:47.948 [2024-11-20 08:30:52.560948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.948 [2024-11-20 08:30:52.560986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:47.948 [2024-11-20 08:30:52.560998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:47.948 [2024-11-20 08:30:52.561236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:47.948 [2024-11-20 08:30:52.561455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:47.948 [2024-11-20 08:30:52.561464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:47.948 [2024-11-20 08:30:52.561472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:47.948 [2024-11-20 08:30:52.561480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:47.948 [2024-11-20 08:30:52.574240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.948 [2024-11-20 08:30:52.574797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.948 [2024-11-20 08:30:52.574836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.948 [2024-11-20 08:30:52.574848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.948 [2024-11-20 08:30:52.575092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.948 [2024-11-20 08:30:52.575313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.948 [2024-11-20 08:30:52.575323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.948 [2024-11-20 08:30:52.575331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.948 [2024-11-20 08:30:52.575338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.948 [2024-11-20 08:30:52.588096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.948 [2024-11-20 08:30:52.588611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.948 [2024-11-20 08:30:52.588648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.948 [2024-11-20 08:30:52.588660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.948 [2024-11-20 08:30:52.588906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.589133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.589142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.589149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.589157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.601911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.602451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.602471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.602479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.602696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.602918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.602927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.602935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.602942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.615689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.616148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.616165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.616172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.616388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.616603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.616613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.616620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.616627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.629488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.630123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.630161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.630172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.630407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.630627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.630636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.630648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.630657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.643416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.644154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.644192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.644203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.644438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.644657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.644666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.644674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.644682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.657262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.657924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.657962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.657975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.658210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.658430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.658439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:47.949 [2024-11-20 08:30:52.658447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:47.949 [2024-11-20 08:30:52.658455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:47.949 [2024-11-20 08:30:52.671010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:47.949 [2024-11-20 08:30:52.671602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.949 [2024-11-20 08:30:52.671639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:47.949 [2024-11-20 08:30:52.671650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:47.949 [2024-11-20 08:30:52.671893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:47.949 [2024-11-20 08:30:52.672114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:47.949 [2024-11-20 08:30:52.672123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.672131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.672141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.684901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.685525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.685563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.685574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.685808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.686035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.686046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.686054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.686062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.698812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.699460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.699480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.699489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.699705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.699927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.699936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.699944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.699951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.712691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.713193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.713232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.713244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.713481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.713700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.713709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.713717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.713725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.726496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.727184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.727222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.727239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.727474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.727694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.727703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.727711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.727719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.740285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.740975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.741013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.741024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.741259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.741478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.741487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.741495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.741503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.754057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.754638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.754657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.754665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.754889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.755105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.755114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.755121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.755128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.767876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.768402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.768419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.768427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.768642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.213 [2024-11-20 08:30:52.768867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.213 [2024-11-20 08:30:52.768877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.213 [2024-11-20 08:30:52.768884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.213 [2024-11-20 08:30:52.768891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.213 [2024-11-20 08:30:52.781631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.213 [2024-11-20 08:30:52.782184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.213 [2024-11-20 08:30:52.782222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.213 [2024-11-20 08:30:52.782233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.213 [2024-11-20 08:30:52.782468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.782688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.782696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.782704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.782712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.795469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.796174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.796212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.796223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.796457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.796677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.796686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.796694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.796702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.809256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.809886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.809924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.809937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.810173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.810392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.810402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.810414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.810422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.823196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.823885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.823923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.823936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.824171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.824390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.824400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.824409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.824417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.836981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.837610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.837648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.837659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.837902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.838122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.838131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.838139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.838147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.850895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.851440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.851459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.851467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.851683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.851904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.851913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.851920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.851927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.864708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.865252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.865270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.865278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.865493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.865709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.865717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.865724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.865731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.878477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.879010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.879048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.879060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.879298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.879518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.879527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.879535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.879543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.892305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.892993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.214 [2024-11-20 08:30:52.893030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.214 [2024-11-20 08:30:52.893043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.214 [2024-11-20 08:30:52.893279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.214 [2024-11-20 08:30:52.893499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.214 [2024-11-20 08:30:52.893507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.214 [2024-11-20 08:30:52.893516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.214 [2024-11-20 08:30:52.893524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.214 [2024-11-20 08:30:52.906074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.214 [2024-11-20 08:30:52.906626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.215 [2024-11-20 08:30:52.906645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.215 [2024-11-20 08:30:52.906659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.215 [2024-11-20 08:30:52.906881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.215 [2024-11-20 08:30:52.907098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.215 [2024-11-20 08:30:52.907112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.215 [2024-11-20 08:30:52.907119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.215 [2024-11-20 08:30:52.907126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.215 [2024-11-20 08:30:52.919871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.215 [2024-11-20 08:30:52.920380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.215 [2024-11-20 08:30:52.920397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.215 [2024-11-20 08:30:52.920405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.215 [2024-11-20 08:30:52.920621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.215 [2024-11-20 08:30:52.920836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.215 [2024-11-20 08:30:52.920844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.215 [2024-11-20 08:30:52.920851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.215 [2024-11-20 08:30:52.920858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.215 [2024-11-20 08:30:52.933625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.215 [2024-11-20 08:30:52.934161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.215 [2024-11-20 08:30:52.934178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.215 [2024-11-20 08:30:52.934186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.215 [2024-11-20 08:30:52.934401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.215 [2024-11-20 08:30:52.934616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.215 [2024-11-20 08:30:52.934625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.215 [2024-11-20 08:30:52.934632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.215 [2024-11-20 08:30:52.934639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.478 [2024-11-20 08:30:52.947388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.478 [2024-11-20 08:30:52.948113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-11-20 08:30:52.948150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.478 [2024-11-20 08:30:52.948163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.478 [2024-11-20 08:30:52.948398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.478 [2024-11-20 08:30:52.948622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.478 [2024-11-20 08:30:52.948632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.478 [2024-11-20 08:30:52.948640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.478 [2024-11-20 08:30:52.948647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.478 [2024-11-20 08:30:52.961197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.478 [2024-11-20 08:30:52.961748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-11-20 08:30:52.961767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.478 [2024-11-20 08:30:52.961775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.478 [2024-11-20 08:30:52.961997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.478 [2024-11-20 08:30:52.962213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.478 [2024-11-20 08:30:52.962222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.478 [2024-11-20 08:30:52.962229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.478 [2024-11-20 08:30:52.962235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.478 [2024-11-20 08:30:52.974977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.478 [2024-11-20 08:30:52.975654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-11-20 08:30:52.975691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.478 [2024-11-20 08:30:52.975702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.478 [2024-11-20 08:30:52.975945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.478 [2024-11-20 08:30:52.976166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.478 [2024-11-20 08:30:52.976174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.478 [2024-11-20 08:30:52.976182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.478 [2024-11-20 08:30:52.976190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.478 [2024-11-20 08:30:52.988730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.478 [2024-11-20 08:30:52.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-11-20 08:30:52.989415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.478 [2024-11-20 08:30:52.989427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.478 [2024-11-20 08:30:52.989661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.478 [2024-11-20 08:30:52.989890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.478 [2024-11-20 08:30:52.989900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.478 [2024-11-20 08:30:52.989917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:52.989925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.002465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.003159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.003198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.003209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.003443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.003663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.003671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.003679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.003687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 7220.50 IOPS, 28.21 MiB/s [2024-11-20T07:30:53.208Z] [2024-11-20 08:30:53.017899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.018544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.018582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.018593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.018827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.019057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.019067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.019075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.019083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.031647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.032300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.032337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.032348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.032583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.032802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.032811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.032819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.032827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.045580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.046211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.046249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.046260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.046495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.046715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.046723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.046731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.046739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.059493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.060165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.060203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.060215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.060449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.060669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.060678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.060686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.060693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.073271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.073946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.073984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.073995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.074230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.074449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.074458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.074466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.074474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.087025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.087714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.087757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.087769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.088015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.088236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.479 [2024-11-20 08:30:53.088246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.479 [2024-11-20 08:30:53.088254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.479 [2024-11-20 08:30:53.088262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.479 [2024-11-20 08:30:53.100814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.479 [2024-11-20 08:30:53.101491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-11-20 08:30:53.101529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.479 [2024-11-20 08:30:53.101540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.479 [2024-11-20 08:30:53.101775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.479 [2024-11-20 08:30:53.102004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.102013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.102021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.102029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.114591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.115245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.115282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.115294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.115528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.115748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.115756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.115764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.115772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.128352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.128958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.128996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.129009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.129249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.129469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.129478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.129487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.129495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.142245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.142896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.142933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.142944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.143179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.143399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.143408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.143415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.143423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.156174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.156877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.156914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.156926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.157163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.157382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.157391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.157400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.157407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.169959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.170489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.170526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.170537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.170772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.171002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.171012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.171024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.171033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.183783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.184430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.184479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.184714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.184943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.184953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.184961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.184968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.480 [2024-11-20 08:30:53.197721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.480 [2024-11-20 08:30:53.198307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-11-20 08:30:53.198327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.480 [2024-11-20 08:30:53.198335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.480 [2024-11-20 08:30:53.198551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.480 [2024-11-20 08:30:53.198767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.480 [2024-11-20 08:30:53.198776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.480 [2024-11-20 08:30:53.198783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.480 [2024-11-20 08:30:53.198790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.743 [2024-11-20 08:30:53.211541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.743 [2024-11-20 08:30:53.212084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.743 [2024-11-20 08:30:53.212122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.743 [2024-11-20 08:30:53.212134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.743 [2024-11-20 08:30:53.212368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.744 [2024-11-20 08:30:53.212588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.744 [2024-11-20 08:30:53.212597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.744 [2024-11-20 08:30:53.212605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.744 [2024-11-20 08:30:53.212612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.744 [2024-11-20 08:30:53.225383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:48.744 [2024-11-20 08:30:53.226016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.744 [2024-11-20 08:30:53.226054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:48.744 [2024-11-20 08:30:53.226065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:48.744 [2024-11-20 08:30:53.226299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:48.744 [2024-11-20 08:30:53.226519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:48.744 [2024-11-20 08:30:53.226527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:48.744 [2024-11-20 08:30:53.226536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:48.744 [2024-11-20 08:30:53.226544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:48.744 [2024-11-20 08:30:53.239308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.239972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.240010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.240023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.240261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.240480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.240489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.240497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.240505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.253065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.253722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.253760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.253772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.254017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.254238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.254247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.254255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.254263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.266802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.267471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.267514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.267525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.267760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.267990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.268000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.268008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.268016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.280582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.281215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.281253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.281264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.281499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.281718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.281728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.281736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.281743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.294489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.295160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.295198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.295209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.295444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.295664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.295672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.295681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.295689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.308229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.308887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.308925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.308936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.309171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.309396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.309405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.309413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.744 [2024-11-20 08:30:53.309421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.744 [2024-11-20 08:30:53.321968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.744 [2024-11-20 08:30:53.322602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.744 [2024-11-20 08:30:53.322639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.744 [2024-11-20 08:30:53.322650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.744 [2024-11-20 08:30:53.322895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.744 [2024-11-20 08:30:53.323116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.744 [2024-11-20 08:30:53.323125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.744 [2024-11-20 08:30:53.323132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.323140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.335904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.336457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.336476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.336484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.336701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.336924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.336933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.336940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.336947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.349683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.350212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.350229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.350237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.350452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.350667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.350676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.350688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.350695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.363429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.363957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.363974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.363982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.364197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.364412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.364421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.364428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.364435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.377174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.377641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.377657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.377665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.377888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.378105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.378113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.378121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.378127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.391074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.391690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.391727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.391738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.391981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.392202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.392211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.392219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.392227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.404974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.405607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.405645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.405656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.405900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.406121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.406131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.406138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.406146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.418891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.419525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.419562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.419574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.419808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.420037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.420047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.420056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.420064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.432626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.433278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.433316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.745 [2024-11-20 08:30:53.433327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.745 [2024-11-20 08:30:53.433562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.745 [2024-11-20 08:30:53.433781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.745 [2024-11-20 08:30:53.433790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.745 [2024-11-20 08:30:53.433798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.745 [2024-11-20 08:30:53.433806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.745 [2024-11-20 08:30:53.446558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.745 [2024-11-20 08:30:53.447160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.745 [2024-11-20 08:30:53.447198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.746 [2024-11-20 08:30:53.447213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.746 [2024-11-20 08:30:53.447448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.746 [2024-11-20 08:30:53.447668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.746 [2024-11-20 08:30:53.447677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.746 [2024-11-20 08:30:53.447684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.746 [2024-11-20 08:30:53.447692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:48.746 [2024-11-20 08:30:53.460445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:48.746 [2024-11-20 08:30:53.461111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.746 [2024-11-20 08:30:53.461149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:48.746 [2024-11-20 08:30:53.461160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:48.746 [2024-11-20 08:30:53.461395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:48.746 [2024-11-20 08:30:53.461615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:48.746 [2024-11-20 08:30:53.461624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:48.746 [2024-11-20 08:30:53.461632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:48.746 [2024-11-20 08:30:53.461640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.008 [2024-11-20 08:30:53.474198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.008 [2024-11-20 08:30:53.474776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.008 [2024-11-20 08:30:53.474795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.008 [2024-11-20 08:30:53.474803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.008 [2024-11-20 08:30:53.475026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.008 [2024-11-20 08:30:53.475242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.008 [2024-11-20 08:30:53.475251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.008 [2024-11-20 08:30:53.475258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.008 [2024-11-20 08:30:53.475265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.008 [2024-11-20 08:30:53.488030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.488540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.488558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.488565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.488781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.489008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.489017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.489025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.489031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.501783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.502413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.502452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.502463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.502698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.502926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.502937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.502945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.502953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.515518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.515994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.516032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.516045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.516283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.516503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.516512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.516520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.516528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.529301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.529851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.529875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.529883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.530101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.530317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.530325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.530337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.530344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.543109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.543725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.543762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.543774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.544017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.544238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.544248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.544256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.544264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.557010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.009 [2024-11-20 08:30:53.557680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.009 [2024-11-20 08:30:53.557717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.009 [2024-11-20 08:30:53.557728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.009 [2024-11-20 08:30:53.557972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.009 [2024-11-20 08:30:53.558193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.009 [2024-11-20 08:30:53.558202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.009 [2024-11-20 08:30:53.558210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.009 [2024-11-20 08:30:53.558218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.009 [2024-11-20 08:30:53.570748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.009 [2024-11-20 08:30:53.571389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.009 [2024-11-20 08:30:53.571427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.009 [2024-11-20 08:30:53.571438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.009 [2024-11-20 08:30:53.571673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.009 [2024-11-20 08:30:53.571902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.009 [2024-11-20 08:30:53.571912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.009 [2024-11-20 08:30:53.571920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.009 [2024-11-20 08:30:53.571928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.009 [2024-11-20 08:30:53.584673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.009 [2024-11-20 08:30:53.585244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.009 [2024-11-20 08:30:53.585282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.009 [2024-11-20 08:30:53.585293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.009 [2024-11-20 08:30:53.585528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.009 [2024-11-20 08:30:53.585747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.009 [2024-11-20 08:30:53.585757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.009 [2024-11-20 08:30:53.585764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.009 [2024-11-20 08:30:53.585772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.009 [2024-11-20 08:30:53.598523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.009 [2024-11-20 08:30:53.599161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.009 [2024-11-20 08:30:53.599201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.009 [2024-11-20 08:30:53.599214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.009 [2024-11-20 08:30:53.599450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.009 [2024-11-20 08:30:53.599670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.009 [2024-11-20 08:30:53.599680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.009 [2024-11-20 08:30:53.599688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.009 [2024-11-20 08:30:53.599696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.009 [2024-11-20 08:30:53.612444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.009 [2024-11-20 08:30:53.613102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.009 [2024-11-20 08:30:53.613140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.009 [2024-11-20 08:30:53.613151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.009 [2024-11-20 08:30:53.613385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.613605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.613614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.613622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.613630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.626187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.626902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.626918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.627152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.627371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.627380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.627388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.627396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.639959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.640498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.640517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.640525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.640741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.640962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.640972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.640979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.640986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.653811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.654478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.654516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.654527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.654761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.654991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.655000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.655008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.655016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.667548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.668182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.668231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.668466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.668694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.668703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.668711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.668719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.681470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.682131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.682169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.682180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.682415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.682635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.682643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.682651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.682659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.695233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.695888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.695925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.695936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.696171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.696391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.696399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.696407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.696415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.709163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.709798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.709836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.709849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.710094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.710315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.710324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.710336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.710344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.010 [2024-11-20 08:30:53.723173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.010 [2024-11-20 08:30:53.723805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.010 [2024-11-20 08:30:53.723843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.010 [2024-11-20 08:30:53.723854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.010 [2024-11-20 08:30:53.724098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.010 [2024-11-20 08:30:53.724319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.010 [2024-11-20 08:30:53.724328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.010 [2024-11-20 08:30:53.724336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.010 [2024-11-20 08:30:53.724344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.272 [2024-11-20 08:30:53.736937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.272 [2024-11-20 08:30:53.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.272 [2024-11-20 08:30:53.737578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.272 [2024-11-20 08:30:53.737589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.272 [2024-11-20 08:30:53.737825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.272 [2024-11-20 08:30:53.738053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.272 [2024-11-20 08:30:53.738064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.272 [2024-11-20 08:30:53.738072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.272 [2024-11-20 08:30:53.738080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.272 [2024-11-20 08:30:53.750852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.272 [2024-11-20 08:30:53.751440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.272 [2024-11-20 08:30:53.751459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.272 [2024-11-20 08:30:53.751467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.272 [2024-11-20 08:30:53.751685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.272 [2024-11-20 08:30:53.751907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.272 [2024-11-20 08:30:53.751916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.272 [2024-11-20 08:30:53.751923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.272 [2024-11-20 08:30:53.751930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.272 [2024-11-20 08:30:53.764689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.272 [2024-11-20 08:30:53.765234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.272 [2024-11-20 08:30:53.765252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.272 [2024-11-20 08:30:53.765260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.272 [2024-11-20 08:30:53.765475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.272 [2024-11-20 08:30:53.765691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.272 [2024-11-20 08:30:53.765700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.272 [2024-11-20 08:30:53.765707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.272 [2024-11-20 08:30:53.765714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.272 [2024-11-20 08:30:53.778464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.272 [2024-11-20 08:30:53.779095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.779132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.779143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.779379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.779598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.779607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.779615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.779623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.792375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.793040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.793078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.793089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.793324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.793544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.793552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.793560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.793568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.806310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.806942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.806979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.806996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.807230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.807450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.807459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.807467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.807475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.820241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.820889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.820927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.820939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.821176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.821396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.821406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.821414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.821422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.833987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.834531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.834568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.834581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.834819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.835047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.835057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.835065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.835073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.847828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.848514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.848552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.848563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.848798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.849031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.849041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.849049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.849057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.861596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.862229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.862267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.862278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.862513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.862732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.862741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.862749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.862757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.875507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.876110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.876159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.876394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.876613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.876622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.876630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.876638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.889388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.889971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.890009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.890021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.890259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.890478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.890487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.890500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.890509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.903294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.273 [2024-11-20 08:30:53.903881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.273 [2024-11-20 08:30:53.903901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.273 [2024-11-20 08:30:53.903909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.273 [2024-11-20 08:30:53.904126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.273 [2024-11-20 08:30:53.904341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.273 [2024-11-20 08:30:53.904349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.273 [2024-11-20 08:30:53.904357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.273 [2024-11-20 08:30:53.904364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.273 [2024-11-20 08:30:53.917139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.917745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.917782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.917794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.918037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.918258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.918266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.918274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.918282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.274 [2024-11-20 08:30:53.931043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.931717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.931754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.931766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.932009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.932230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.932239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.932247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.932255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.274 [2024-11-20 08:30:53.944796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.945475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.945512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.945523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.945758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.945988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.946005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.946014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.946022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.274 [2024-11-20 08:30:53.958560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.959194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.959232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.959243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.959478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.959698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.959707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.959715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.959723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.274 [2024-11-20 08:30:53.972494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.973160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.973198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.973209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.973445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.973664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.973673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.973681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.973689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.274 [2024-11-20 08:30:53.986276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.274 [2024-11-20 08:30:53.986962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.274 [2024-11-20 08:30:53.987001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.274 [2024-11-20 08:30:53.987018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.274 [2024-11-20 08:30:53.987254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.274 [2024-11-20 08:30:53.987474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.274 [2024-11-20 08:30:53.987483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.274 [2024-11-20 08:30:53.987491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.274 [2024-11-20 08:30:53.987499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.000044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.000585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.000605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.000613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.000829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.001051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.001066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.001074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.001082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.013822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.014357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.014374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.014382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.014597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.014817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.014827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.014834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.014841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 5776.40 IOPS, 22.56 MiB/s [2024-11-20T07:30:54.265Z] [2024-11-20 08:30:54.027593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.028080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.028118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.028129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.028369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.028588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.028597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.028605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.028613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.041370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.041989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.042028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.042040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.042278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.042498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.042507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.042515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.042523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.055282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.055756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.055775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.055784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.056006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.056222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.056237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.056245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.056252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.069213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.069741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.069758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.069765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.069986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.070202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.070210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.070222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.070229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.083018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.083670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.083707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.083718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.083962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.084183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.084191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.084200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.084208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.096765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.536 [2024-11-20 08:30:54.097443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.536 [2024-11-20 08:30:54.097481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.536 [2024-11-20 08:30:54.097492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.536 [2024-11-20 08:30:54.097727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.536 [2024-11-20 08:30:54.097957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.536 [2024-11-20 08:30:54.097967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.536 [2024-11-20 08:30:54.097975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.536 [2024-11-20 08:30:54.097983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.536 [2024-11-20 08:30:54.110577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.111162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.111182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.111190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.111407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.111623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.111632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.111639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.111646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.124423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.124965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.125004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.125016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.125254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.125474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.125483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.125491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.125499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.138287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.138872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.138892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.138901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.139117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.139333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.139342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.139349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.139356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.152127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.152695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.152732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.152745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.152991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.153212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.153222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.153230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.153238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.166011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.166536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.166560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.166569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.166785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.167010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.167020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.167027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.167034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.179797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.180369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.180387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.180394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.180609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.180825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.180833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.180840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.180847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.193609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:49.537 [2024-11-20 08:30:54.194112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.537 [2024-11-20 08:30:54.194128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:49.537 [2024-11-20 08:30:54.194136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:49.537 [2024-11-20 08:30:54.194351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:49.537 [2024-11-20 08:30:54.194567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:49.537 [2024-11-20 08:30:54.194576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:49.537 [2024-11-20 08:30:54.194583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:49.537 [2024-11-20 08:30:54.194589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:49.537 [2024-11-20 08:30:54.207348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.537 [2024-11-20 08:30:54.207873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.537 [2024-11-20 08:30:54.207891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.537 [2024-11-20 08:30:54.207898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.537 [2024-11-20 08:30:54.208117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.537 [2024-11-20 08:30:54.208333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.537 [2024-11-20 08:30:54.208341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.537 [2024-11-20 08:30:54.208348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.537 [2024-11-20 08:30:54.208355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.537 [2024-11-20 08:30:54.221121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.537 [2024-11-20 08:30:54.221643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.537 [2024-11-20 08:30:54.221659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.537 [2024-11-20 08:30:54.221666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.537 [2024-11-20 08:30:54.221888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.537 [2024-11-20 08:30:54.222105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.537 [2024-11-20 08:30:54.222114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.537 [2024-11-20 08:30:54.222121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.537 [2024-11-20 08:30:54.222128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.537 [2024-11-20 08:30:54.234956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.537 [2024-11-20 08:30:54.235483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.537 [2024-11-20 08:30:54.235500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.537 [2024-11-20 08:30:54.235507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.537 [2024-11-20 08:30:54.235723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.537 [2024-11-20 08:30:54.235944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.537 [2024-11-20 08:30:54.235959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.538 [2024-11-20 08:30:54.235966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.538 [2024-11-20 08:30:54.235973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.538 [2024-11-20 08:30:54.248731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.538 [2024-11-20 08:30:54.249348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.538 [2024-11-20 08:30:54.249364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.538 [2024-11-20 08:30:54.249372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.538 [2024-11-20 08:30:54.249587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.538 [2024-11-20 08:30:54.249802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.538 [2024-11-20 08:30:54.249811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.538 [2024-11-20 08:30:54.249822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.538 [2024-11-20 08:30:54.249828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.262599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.263052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.263069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.263077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.263292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.263507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.263516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.263523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.263530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.276498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.276925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.276941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.276949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.277165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.277379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.277388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.277395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.277402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.290372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.290910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.290935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.290944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.291163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.291380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.291389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.291396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.291403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.304158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.304678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.304695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.304703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.304923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.305140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.305149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.305156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.305163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.317949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.318442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.318459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.318467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.318683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.318906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.318915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.318922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.318928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.331704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.332242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.332259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.332267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.332482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.332698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.800 [2024-11-20 08:30:54.332706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.800 [2024-11-20 08:30:54.332713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.800 [2024-11-20 08:30:54.332720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.800 [2024-11-20 08:30:54.345480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.800 [2024-11-20 08:30:54.346010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.800 [2024-11-20 08:30:54.346035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.800 [2024-11-20 08:30:54.346042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.800 [2024-11-20 08:30:54.346258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.800 [2024-11-20 08:30:54.346473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.346481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.346489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.346496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.359257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.359673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.359689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.359696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.359918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.360134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.360141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.360149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.360155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.373159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.373582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.373600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.373608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.373824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.374054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.374065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.374072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.374079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.387045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.387616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.387632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.387640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.387859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.388082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.388090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.388097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.388104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.400858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.401378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.401395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.401402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.401617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.401832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.401841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.401848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.401854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.414611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.415121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.415137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.415145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.415361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.415575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.415584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.415591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.415598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.428364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.428989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.429027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.429040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.429276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.429506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.429517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.429529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.429537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.442099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.442666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.442685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.442693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.442914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.443132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.443141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.443148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.443155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.455901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.456431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.456447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.456455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.456670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.456891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.456901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.456908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.456914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.469652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.470222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.470239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.470246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.470461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.801 [2024-11-20 08:30:54.470676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.801 [2024-11-20 08:30:54.470685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.801 [2024-11-20 08:30:54.470692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.801 [2024-11-20 08:30:54.470698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.801 [2024-11-20 08:30:54.483445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.801 [2024-11-20 08:30:54.483980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.801 [2024-11-20 08:30:54.483997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.801 [2024-11-20 08:30:54.484004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.801 [2024-11-20 08:30:54.484219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.802 [2024-11-20 08:30:54.484434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.802 [2024-11-20 08:30:54.484443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.802 [2024-11-20 08:30:54.484451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.802 [2024-11-20 08:30:54.484457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.802 [2024-11-20 08:30:54.497196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.802 [2024-11-20 08:30:54.497817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.802 [2024-11-20 08:30:54.497855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.802 [2024-11-20 08:30:54.497873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.802 [2024-11-20 08:30:54.498109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.802 [2024-11-20 08:30:54.498329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.802 [2024-11-20 08:30:54.498337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.802 [2024-11-20 08:30:54.498345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.802 [2024-11-20 08:30:54.498353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.802 [2024-11-20 08:30:54.511106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:49.802 [2024-11-20 08:30:54.511682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.802 [2024-11-20 08:30:54.511701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:49.802 [2024-11-20 08:30:54.511709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:49.802 [2024-11-20 08:30:54.511931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:49.802 [2024-11-20 08:30:54.512148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:49.802 [2024-11-20 08:30:54.512157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:49.802 [2024-11-20 08:30:54.512164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:49.802 [2024-11-20 08:30:54.512171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:49.802 [2024-11-20 08:30:54.524943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 [2024-11-20 08:30:54.525552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.064 [2024-11-20 08:30:54.525595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.064 [2024-11-20 08:30:54.525606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.064 [2024-11-20 08:30:54.525840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.064 [2024-11-20 08:30:54.526070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.064 [2024-11-20 08:30:54.526080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.064 [2024-11-20 08:30:54.526088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.064 [2024-11-20 08:30:54.526097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.064 [2024-11-20 08:30:54.538876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 [2024-11-20 08:30:54.539465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.064 [2024-11-20 08:30:54.539503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.064 [2024-11-20 08:30:54.539516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.064 [2024-11-20 08:30:54.539753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.064 [2024-11-20 08:30:54.539981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.064 [2024-11-20 08:30:54.539991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.064 [2024-11-20 08:30:54.539999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.064 [2024-11-20 08:30:54.540007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.064 [2024-11-20 08:30:54.552759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 [2024-11-20 08:30:54.553398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.064 [2024-11-20 08:30:54.553436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.064 [2024-11-20 08:30:54.553447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.064 [2024-11-20 08:30:54.553682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.064 [2024-11-20 08:30:54.553910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.064 [2024-11-20 08:30:54.553920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.064 [2024-11-20 08:30:54.553927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.064 [2024-11-20 08:30:54.553935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.064 [2024-11-20 08:30:54.566681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 [2024-11-20 08:30:54.567389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.064 [2024-11-20 08:30:54.567427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.064 [2024-11-20 08:30:54.567438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.064 [2024-11-20 08:30:54.567678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.064 [2024-11-20 08:30:54.567905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.064 [2024-11-20 08:30:54.567915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.064 [2024-11-20 08:30:54.567923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.064 [2024-11-20 08:30:54.567931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.064 [2024-11-20 08:30:54.580475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 [2024-11-20 08:30:54.581152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.064 [2024-11-20 08:30:54.581190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.064 [2024-11-20 08:30:54.581203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.064 [2024-11-20 08:30:54.581438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.064 [2024-11-20 08:30:54.581658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.064 [2024-11-20 08:30:54.581667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.064 [2024-11-20 08:30:54.581675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.064 [2024-11-20 08:30:54.581682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2178762 Killed "${NVMF_APP[@]}" "$@"
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=2180405
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 2180405
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2180405 ']'
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:50.064 [2024-11-20 08:30:54.594245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:50.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:50.064 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:50.064 [2024-11-20 08:30:54.594716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.594736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.594744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 08:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:50.065 [2024-11-20 08:30:54.594970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.595188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.595196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.595204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.595211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.608168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.608834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.608880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.608894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.609131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.609351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.609360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.609369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.609377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.621934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.622379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.622400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.622408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.622625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.622842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.622850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.622857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.622871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.635851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.636473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.636511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.636522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.636758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.636985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.637007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.637015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.637023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.645835] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:33:50.065 [2024-11-20 08:30:54.645887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:50.065 [2024-11-20 08:30:54.649781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.650378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.650398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.650406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.650622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.650838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.650847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.650854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.650867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.663611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.664049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.664066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.664074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.664290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.664505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.664514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.664522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.664528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.677479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.677874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.677890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.677898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.678113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.678333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.678342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.678350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.678357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.691254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.691801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.691827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.692048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.692265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.692274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.692282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.692289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.705034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.705568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.705584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.705591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.705807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.706027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.065 [2024-11-20 08:30:54.706036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.065 [2024-11-20 08:30:54.706043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.065 [2024-11-20 08:30:54.706049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.065 [2024-11-20 08:30:54.718788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.065 [2024-11-20 08:30:54.719320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.065 [2024-11-20 08:30:54.719337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.065 [2024-11-20 08:30:54.719345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.065 [2024-11-20 08:30:54.719560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.065 [2024-11-20 08:30:54.719776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.066 [2024-11-20 08:30:54.719784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.066 [2024-11-20 08:30:54.719795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.066 [2024-11-20 08:30:54.719802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.066 [2024-11-20 08:30:54.732617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.066 [2024-11-20 08:30:54.733182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.066 [2024-11-20 08:30:54.733200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.066 [2024-11-20 08:30:54.733207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.066 [2024-11-20 08:30:54.733423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.066 [2024-11-20 08:30:54.733639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.066 [2024-11-20 08:30:54.733647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.066 [2024-11-20 08:30:54.733654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.066 [2024-11-20 08:30:54.733661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.066 [2024-11-20 08:30:54.743584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:50.066 [2024-11-20 08:30:54.746412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.066 [2024-11-20 08:30:54.747014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.066 [2024-11-20 08:30:54.747053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.066 [2024-11-20 08:30:54.747066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.066 [2024-11-20 08:30:54.747306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.066 [2024-11-20 08:30:54.747526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.066 [2024-11-20 08:30:54.747535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.066 [2024-11-20 08:30:54.747543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.066 [2024-11-20 08:30:54.747551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.066 [2024-11-20 08:30:54.760320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.066 [2024-11-20 08:30:54.760969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.066 [2024-11-20 08:30:54.761007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.066 [2024-11-20 08:30:54.761020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.066 [2024-11-20 08:30:54.761257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.066 [2024-11-20 08:30:54.761477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.066 [2024-11-20 08:30:54.761486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.066 [2024-11-20 08:30:54.761494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.066 [2024-11-20 08:30:54.761502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.066 [2024-11-20 08:30:54.772569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:50.066 [2024-11-20 08:30:54.772590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:50.066 [2024-11-20 08:30:54.772597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:50.066 [2024-11-20 08:30:54.772603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:50.066 [2024-11-20 08:30:54.772607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:50.066 [2024-11-20 08:30:54.773695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:50.066 [2024-11-20 08:30:54.773853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:50.066 [2024-11-20 08:30:54.773856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:50.066 [2024-11-20 08:30:54.774063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.066 [2024-11-20 08:30:54.774750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.066 [2024-11-20 08:30:54.774788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.066 [2024-11-20 08:30:54.774801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.066 [2024-11-20 08:30:54.775046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.066 [2024-11-20 08:30:54.775267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.066 [2024-11-20 08:30:54.775277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.066 [2024-11-20 08:30:54.775285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.066 [2024-11-20 08:30:54.775293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.066 [2024-11-20 08:30:54.787847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.066 [2024-11-20 08:30:54.788541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.066 [2024-11-20 08:30:54.788580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.066 [2024-11-20 08:30:54.788592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.066 [2024-11-20 08:30:54.788827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.328 [2024-11-20 08:30:54.789056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.328 [2024-11-20 08:30:54.789068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.328 [2024-11-20 08:30:54.789077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.328 [2024-11-20 08:30:54.789085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.328 [2024-11-20 08:30:54.801630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.328 [2024-11-20 08:30:54.802320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.328 [2024-11-20 08:30:54.802359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.328 [2024-11-20 08:30:54.802370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.328 [2024-11-20 08:30:54.802605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.328 [2024-11-20 08:30:54.803046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.328 [2024-11-20 08:30:54.803059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.328 [2024-11-20 08:30:54.803068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.328 [2024-11-20 08:30:54.803076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.328 [2024-11-20 08:30:54.815424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.328 [2024-11-20 08:30:54.816179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.329 [2024-11-20 08:30:54.816218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.329 [2024-11-20 08:30:54.816229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.329 [2024-11-20 08:30:54.816465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.329 [2024-11-20 08:30:54.816684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.329 [2024-11-20 08:30:54.816694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.329 [2024-11-20 08:30:54.816702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.329 [2024-11-20 08:30:54.816710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.329 [2024-11-20 08:30:54.829263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.329 [2024-11-20 08:30:54.829700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.329 [2024-11-20 08:30:54.829721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.329 [2024-11-20 08:30:54.829730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.329 [2024-11-20 08:30:54.829965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.329 [2024-11-20 08:30:54.830182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.329 [2024-11-20 08:30:54.830190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.329 [2024-11-20 08:30:54.830198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.329 [2024-11-20 08:30:54.830205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.329 [2024-11-20 08:30:54.843171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.329 [2024-11-20 08:30:54.843777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.329 [2024-11-20 08:30:54.843794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.329 [2024-11-20 08:30:54.843802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.329 [2024-11-20 08:30:54.844024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.329 [2024-11-20 08:30:54.844240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.329 [2024-11-20 08:30:54.844248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.329 [2024-11-20 08:30:54.844261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.329 [2024-11-20 08:30:54.844268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.329 [2024-11-20 08:30:54.857013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.329 [2024-11-20 08:30:54.857554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.329 [2024-11-20 08:30:54.857570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.329 [2024-11-20 08:30:54.857578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.329 [2024-11-20 08:30:54.857794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.329 [2024-11-20 08:30:54.858015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.329 [2024-11-20 08:30:54.858025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.329 [2024-11-20 08:30:54.858033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.329 [2024-11-20 08:30:54.858040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.329 [2024-11-20 08:30:54.870779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.329 [2024-11-20 08:30:54.871324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.329 [2024-11-20 08:30:54.871340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.329 [2024-11-20 08:30:54.871348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.329 [2024-11-20 08:30:54.871563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.329 [2024-11-20 08:30:54.871778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.329 [2024-11-20 08:30:54.871786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.329 [2024-11-20 08:30:54.871794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.329 [2024-11-20 08:30:54.871801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.329 [2024-11-20 08:30:54.884595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.329 [2024-11-20 08:30:54.885117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.329 [2024-11-20 08:30:54.885134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.329 [2024-11-20 08:30:54.885142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.329 [2024-11-20 08:30:54.885357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.329 [2024-11-20 08:30:54.885572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.329 [2024-11-20 08:30:54.885580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.329 [2024-11-20 08:30:54.885587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.329 [2024-11-20 08:30:54.885594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.329 [2024-11-20 08:30:54.898339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.329 [2024-11-20 08:30:54.898877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.329 [2024-11-20 08:30:54.898894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.329 [2024-11-20 08:30:54.898902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.329 [2024-11-20 08:30:54.899117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.329 [2024-11-20 08:30:54.899333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.329 [2024-11-20 08:30:54.899341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.329 [2024-11-20 08:30:54.899348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.329 [2024-11-20 08:30:54.899355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.329 [2024-11-20 08:30:54.912090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.329 [2024-11-20 08:30:54.912687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.329 [2024-11-20 08:30:54.912703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.329 [2024-11-20 08:30:54.912710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.329 [2024-11-20 08:30:54.912930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.329 [2024-11-20 08:30:54.913146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.329 [2024-11-20 08:30:54.913155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.329 [2024-11-20 08:30:54.913162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.329 [2024-11-20 08:30:54.913168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.329 [2024-11-20 08:30:54.925907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.329 [2024-11-20 08:30:54.926538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.329 [2024-11-20 08:30:54.926576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.329 [2024-11-20 08:30:54.926588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.329 [2024-11-20 08:30:54.926822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.329 [2024-11-20 08:30:54.927051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.329 [2024-11-20 08:30:54.927061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.329 [2024-11-20 08:30:54.927069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.329 [2024-11-20 08:30:54.927077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.329 [2024-11-20 08:30:54.939690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.329 [2024-11-20 08:30:54.940021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.329 [2024-11-20 08:30:54.940047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.329 [2024-11-20 08:30:54.940062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.329 [2024-11-20 08:30:54.940283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.329 [2024-11-20 08:30:54.940501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.329 [2024-11-20 08:30:54.940509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.329 [2024-11-20 08:30:54.940517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:54.940524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:54.953484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:54.954207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:54.954245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:54.954257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:54.954493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:54.954712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:54.954721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:54.954729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:54.954738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:54.967288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:54.967877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:54.967897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:54.967905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:54.968121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:54.968338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:54.968346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:54.968353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:54.968360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:54.981101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:54.981393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:54.981416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:54.981424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:54.981644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:54.981875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:54.981884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:54.981891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:54.981898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:54.994841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:54.995500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:54.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:54.995550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:54.995784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:54.996012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:54.996021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:54.996029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:54.996037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:55.008583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:55.009267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:55.009305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:55.009316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:55.009551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:55.009771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:55.009780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:55.009788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:55.009796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 4813.67 IOPS, 18.80 MiB/s [2024-11-20T07:30:55.059Z] [2024-11-20 08:30:55.024002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:55.024558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:55.024577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:55.024585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:55.024801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:55.025023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:55.025033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:55.025045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:55.025052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:55.037812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:55.038489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:55.038527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:55.038538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:55.038773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:55.038999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:55.039009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:55.039018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:55.039026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.330 [2024-11-20 08:30:55.051569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.330 [2024-11-20 08:30:55.052153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.330 [2024-11-20 08:30:55.052173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.330 [2024-11-20 08:30:55.052181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.330 [2024-11-20 08:30:55.052397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.330 [2024-11-20 08:30:55.052613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.330 [2024-11-20 08:30:55.052621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.330 [2024-11-20 08:30:55.052628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.330 [2024-11-20 08:30:55.052635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.592 [2024-11-20 08:30:55.065380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.592 [2024-11-20 08:30:55.065905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.592 [2024-11-20 08:30:55.065943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.592 [2024-11-20 08:30:55.065955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.592 [2024-11-20 08:30:55.066191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.592 [2024-11-20 08:30:55.066411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.592 [2024-11-20 08:30:55.066420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.592 [2024-11-20 08:30:55.066428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.592 [2024-11-20 08:30:55.066436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.592 [2024-11-20 08:30:55.079194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.592 [2024-11-20 08:30:55.079853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.592 [2024-11-20 08:30:55.079899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.592 [2024-11-20 08:30:55.079911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.592 [2024-11-20 08:30:55.080148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.592 [2024-11-20 08:30:55.080368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.592 [2024-11-20 08:30:55.080377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.592 [2024-11-20 08:30:55.080385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.592 [2024-11-20 08:30:55.080392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.592 [2024-11-20 08:30:55.092935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.592 [2024-11-20 08:30:55.093619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.592 [2024-11-20 08:30:55.093657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.592 [2024-11-20 08:30:55.093668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.592 [2024-11-20 08:30:55.093911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.592 [2024-11-20 08:30:55.094131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.593 [2024-11-20 08:30:55.094140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.593 [2024-11-20 08:30:55.094149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.593 [2024-11-20 08:30:55.094157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.593 [2024-11-20 08:30:55.106695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.593 [2024-11-20 08:30:55.107356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.593 [2024-11-20 08:30:55.107394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.593 [2024-11-20 08:30:55.107405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.593 [2024-11-20 08:30:55.107640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.593 [2024-11-20 08:30:55.107859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.593 [2024-11-20 08:30:55.107876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.593 [2024-11-20 08:30:55.107884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.593 [2024-11-20 08:30:55.107893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.593 [2024-11-20 08:30:55.120439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.593 [2024-11-20 08:30:55.120999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.593 [2024-11-20 08:30:55.121038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.593 [2024-11-20 08:30:55.121055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.593 [2024-11-20 08:30:55.121293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.593 [2024-11-20 08:30:55.121512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.593 [2024-11-20 08:30:55.121521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.593 [2024-11-20 08:30:55.121529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.593 [2024-11-20 08:30:55.121537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.593 [2024-11-20 08:30:55.134360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.593 [2024-11-20 08:30:55.135065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.593 [2024-11-20 08:30:55.135103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.593 [2024-11-20 08:30:55.135114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.593 [2024-11-20 08:30:55.135349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.593 [2024-11-20 08:30:55.135568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.593 [2024-11-20 08:30:55.135577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.593 [2024-11-20 08:30:55.135585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.593 [2024-11-20 08:30:55.135593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.593 [2024-11-20 08:30:55.148175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.593 [2024-11-20 08:30:55.148872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.593 [2024-11-20 08:30:55.148910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.593 [2024-11-20 08:30:55.148921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.593 [2024-11-20 08:30:55.149156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.593 [2024-11-20 08:30:55.149375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.593 [2024-11-20 08:30:55.149384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.593 [2024-11-20 08:30:55.149392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.593 [2024-11-20 08:30:55.149400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.593 [2024-11-20 08:30:55.161940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.162496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.162516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.162524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.162740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.593 [2024-11-20 08:30:55.162967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.593 [2024-11-20 08:30:55.162977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.593 [2024-11-20 08:30:55.162984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.593 [2024-11-20 08:30:55.162992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.593 [2024-11-20 08:30:55.175727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.176257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.176296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.176307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.176542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.593 [2024-11-20 08:30:55.176761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.593 [2024-11-20 08:30:55.176770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.593 [2024-11-20 08:30:55.176777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.593 [2024-11-20 08:30:55.176785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.593 [2024-11-20 08:30:55.189536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.190216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.190254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.190265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.190500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.593 [2024-11-20 08:30:55.190720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.593 [2024-11-20 08:30:55.190729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.593 [2024-11-20 08:30:55.190737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.593 [2024-11-20 08:30:55.190746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.593 [2024-11-20 08:30:55.203297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.203767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.203786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.203794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.204016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.593 [2024-11-20 08:30:55.204232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.593 [2024-11-20 08:30:55.204241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.593 [2024-11-20 08:30:55.204253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.593 [2024-11-20 08:30:55.204260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.593 [2024-11-20 08:30:55.217208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.217753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.217770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.217777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.217998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.593 [2024-11-20 08:30:55.218215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.593 [2024-11-20 08:30:55.218223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.593 [2024-11-20 08:30:55.218231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.593 [2024-11-20 08:30:55.218237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.593 [2024-11-20 08:30:55.230982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.593 [2024-11-20 08:30:55.231523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.593 [2024-11-20 08:30:55.231539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.593 [2024-11-20 08:30:55.231547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.593 [2024-11-20 08:30:55.231762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.231992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.232003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.232010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.232017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.244753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.245166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.245183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.245190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.245406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.245621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.245630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.245638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.245646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.258595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.259272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.259311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.259322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.259558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.259777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.259786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.259793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.259801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.272353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.272965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.273003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.273016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.273254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.273474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.273492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.273500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.273508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.286266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.286918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.286955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.286968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.287206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.287425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.287442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.287450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.287458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.300016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.300669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.300707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.300723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.300965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.301186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.301196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.301204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.301212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.594 [2024-11-20 08:30:55.313750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.594 [2024-11-20 08:30:55.314148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.594 [2024-11-20 08:30:55.314167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.594 [2024-11-20 08:30:55.314175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.594 [2024-11-20 08:30:55.314391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.594 [2024-11-20 08:30:55.314607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.594 [2024-11-20 08:30:55.314616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.594 [2024-11-20 08:30:55.314623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.594 [2024-11-20 08:30:55.314630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.856 [2024-11-20 08:30:55.327580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.856 [2024-11-20 08:30:55.327908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.856 [2024-11-20 08:30:55.327926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.856 [2024-11-20 08:30:55.327934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.856 [2024-11-20 08:30:55.328149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.856 [2024-11-20 08:30:55.328365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.856 [2024-11-20 08:30:55.328373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.856 [2024-11-20 08:30:55.328380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.856 [2024-11-20 08:30:55.328387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.856 [2024-11-20 08:30:55.341357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.856 [2024-11-20 08:30:55.341729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.856 [2024-11-20 08:30:55.341746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.856 [2024-11-20 08:30:55.341754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.856 [2024-11-20 08:30:55.341974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.856 [2024-11-20 08:30:55.342196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.342204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.342212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.342218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.355191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.355735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.355752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.355760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.355982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.356199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.356208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.356215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.356223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.368961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.369606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.369643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.369655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.369897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.370118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.370127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.370135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.370144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.382686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.383375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.383413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.383425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.383661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.383889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.383899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.383912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.383920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.396468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.397130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.397169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.397180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.397415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.397634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.397643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.397650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.397658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.410208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.410858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.410902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.410914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.411148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.411367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.411376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.411384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.411392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.424143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.424707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.424744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.424756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.425003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.425225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.425235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.425243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.425251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:50.857 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:33:50.857 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:33:50.857 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:50.857 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:50.857 [2024-11-20 08:30:55.438032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.438674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.438712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.438723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.438966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.439186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.439196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.439204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.439212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.451963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.452404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.452423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.452432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.452648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.452871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.452880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.452888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.857 [2024-11-20 08:30:55.452895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.857 [2024-11-20 08:30:55.465849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.857 [2024-11-20 08:30:55.466357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.857 [2024-11-20 08:30:55.466395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.857 [2024-11-20 08:30:55.466407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.857 [2024-11-20 08:30:55.466642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.857 [2024-11-20 08:30:55.466871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.857 [2024-11-20 08:30:55.466881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.857 [2024-11-20 08:30:55.466889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.858 [2024-11-20 08:30:55.466902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:50.858 [2024-11-20 08:30:55.474147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:50.858 [2024-11-20 08:30:55.479652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:50.858 [2024-11-20 08:30:55.480332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.858 [2024-11-20 08:30:55.480371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.858 [2024-11-20 08:30:55.480382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:50.858 [2024-11-20 08:30:55.480617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.858 [2024-11-20 08:30:55.480836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.858 [2024-11-20 08:30:55.480845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.858 [2024-11-20 08:30:55.480853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.858 [2024-11-20 08:30:55.480861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.858 [2024-11-20 08:30:55.493415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:50.858 [2024-11-20 08:30:55.494040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.858 [2024-11-20 08:30:55.494077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420
00:33:50.858 [2024-11-20 08:30:55.494089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set
00:33:50.858 [2024-11-20 08:30:55.494323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor
00:33:50.858 [2024-11-20 08:30:55.494542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:50.858 [2024-11-20 08:30:55.494551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:50.858 [2024-11-20 08:30:55.494559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:50.858 [2024-11-20 08:30:55.494567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:50.858 [2024-11-20 08:30:55.507321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.858 Malloc0 00:33:50.858 [2024-11-20 08:30:55.508104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.858 [2024-11-20 08:30:55.508142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.858 [2024-11-20 08:30:55.508153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.858 [2024-11-20 08:30:55.508393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.858 [2024-11-20 08:30:55.508612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.858 [2024-11-20 08:30:55.508622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.858 [2024-11-20 08:30:55.508630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.858 [2024-11-20 08:30:55.508638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.858 [2024-11-20 08:30:55.521190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.858 [2024-11-20 08:30:55.521921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.858 [2024-11-20 08:30:55.521958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.858 [2024-11-20 08:30:55.521971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.858 [2024-11-20 08:30:55.522210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.858 [2024-11-20 08:30:55.522429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.858 [2024-11-20 08:30:55.522438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.858 [2024-11-20 08:30:55.522446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:50.858 [2024-11-20 08:30:55.522454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.858 [2024-11-20 08:30:55.535027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:50.858 [2024-11-20 08:30:55.535593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.858 [2024-11-20 08:30:55.535630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c446a0 with addr=10.0.0.2, port=4420 00:33:50.858 [2024-11-20 08:30:55.535641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c446a0 is same with the state(6) to be set 00:33:50.858 [2024-11-20 08:30:55.535882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c446a0 (9): Bad file descriptor 00:33:50.858 [2024-11-20 08:30:55.536103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:50.858 [2024-11-20 08:30:55.536117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:50.858 [2024-11-20 08:30:55.536125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:33:50.858 [2024-11-20 08:30:55.536133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:50.858 [2024-11-20 08:30:55.539476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.858 08:30:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2179140 00:33:50.858 [2024-11-20 08:30:55.548885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:51.119 [2024-11-20 08:30:55.709029] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:33:52.323 4651.57 IOPS, 18.17 MiB/s [2024-11-20T07:30:58.439Z] 5473.12 IOPS, 21.38 MiB/s [2024-11-20T07:30:59.381Z] 6109.89 IOPS, 23.87 MiB/s [2024-11-20T07:31:00.323Z] 6633.70 IOPS, 25.91 MiB/s [2024-11-20T07:31:01.265Z] 7046.64 IOPS, 27.53 MiB/s [2024-11-20T07:31:02.208Z] 7404.75 IOPS, 28.92 MiB/s [2024-11-20T07:31:03.152Z] 7703.23 IOPS, 30.09 MiB/s [2024-11-20T07:31:04.095Z] 7968.50 IOPS, 31.13 MiB/s [2024-11-20T07:31:04.095Z] 8192.00 IOPS, 32.00 MiB/s 00:33:59.367 Latency(us) 00:33:59.367 [2024-11-20T07:31:04.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.367 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:59.367 Verification LBA range: start 0x0 length 0x4000 00:33:59.367 Nvme1n1 : 15.01 8190.15 31.99 10255.28 0.00 6913.86 788.48 15947.09 00:33:59.367 [2024-11-20T07:31:04.096Z] =================================================================================================================== 00:33:59.367 [2024-11-20T07:31:04.096Z] Total : 8190.15 31.99 10255.28 0.00 6913.86 788.48 15947.09 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:59.628 
08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:59.628 rmmod nvme_tcp 00:33:59.628 rmmod nvme_fabrics 00:33:59.628 rmmod nvme_keyring 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 2180405 ']' 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 2180405 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2180405 ']' 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # kill -0 2180405 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180405 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180405' 00:33:59.628 killing process with pid 2180405 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2180405 00:33:59.628 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2180405 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@254 -- # local dev 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:59.889 08:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:01.803 
08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # return 0 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:34:01.803 08:31:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:34:01.803 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@274 -- # iptr 00:34:01.804 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-save 00:34:01.804 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:01.804 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-restore 00:34:01.804 00:34:01.804 real 0m29.241s 00:34:01.804 user 1m3.639s 00:34:01.804 sys 0m8.275s 00:34:01.804 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.804 08:31:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.804 ************************************ 00:34:01.804 END TEST nvmf_bdevperf 00:34:01.804 ************************************ 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.065 ************************************ 00:34:02.065 START TEST nvmf_target_disconnect 00:34:02.065 ************************************ 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:02.065 * Looking for test storage... 
00:34:02.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:02.065 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:02.066 08:31:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.066 
--rc genhtml_branch_coverage=1 00:34:02.066 --rc genhtml_function_coverage=1 00:34:02.066 --rc genhtml_legend=1 00:34:02.066 --rc geninfo_all_blocks=1 00:34:02.066 --rc geninfo_unexecuted_blocks=1 00:34:02.066 00:34:02.066 ' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.066 --rc genhtml_branch_coverage=1 00:34:02.066 --rc genhtml_function_coverage=1 00:34:02.066 --rc genhtml_legend=1 00:34:02.066 --rc geninfo_all_blocks=1 00:34:02.066 --rc geninfo_unexecuted_blocks=1 00:34:02.066 00:34:02.066 ' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.066 --rc genhtml_branch_coverage=1 00:34:02.066 --rc genhtml_function_coverage=1 00:34:02.066 --rc genhtml_legend=1 00:34:02.066 --rc geninfo_all_blocks=1 00:34:02.066 --rc geninfo_unexecuted_blocks=1 00:34:02.066 00:34:02.066 ' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:02.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.066 --rc genhtml_branch_coverage=1 00:34:02.066 --rc genhtml_function_coverage=1 00:34:02.066 --rc genhtml_legend=1 00:34:02.066 --rc geninfo_all_blocks=1 00:34:02.066 --rc geninfo_unexecuted_blocks=1 00:34:02.066 00:34:02.066 ' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:02.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:02.066 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:02.067 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:34:02.328 08:31:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:34:10.478 08:31:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:10.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:10.478 08:31:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:10.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:31:00.0: cvl_0_0' 00:34:10.478 Found net devices under 0000:31:00.0: cvl_0_0 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:10.478 Found net devices under 0000:31:00.1: cvl_0_1 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:10.478 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:10.479 08:31:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:10.479 08:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:10.479 10.0.0.1 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:10.479 10.0.0.2 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:10.479 
08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:10.479 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:10.741 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:10.741 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:10.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:10.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.483 ms 00:34:10.741 00:34:10.741 --- 10.0.0.1 ping statistics --- 00:34:10.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.741 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:10.741 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:10.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:10.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:34:10.741 00:34:10.741 --- 10.0.0.2 ping statistics --- 00:34:10.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.741 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:34:10.741 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:10.742 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:34:10.742 
08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:34:10.742 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:10.742 ************************************ 00:34:10.742 START TEST nvmf_target_disconnect_tc1 00:34:10.742 ************************************ 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 
]] 00:34:10.742 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:11.004 [2024-11-20 08:31:15.569628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.004 [2024-11-20 08:31:15.569690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x562cf0 with addr=10.0.0.2, port=4420 00:34:11.004 [2024-11-20 08:31:15.569714] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:11.004 [2024-11-20 08:31:15.569724] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:11.004 [2024-11-20 08:31:15.569732] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:11.004 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:11.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:11.004 Initializing NVMe Controllers 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:11.004 00:34:11.004 real 0m0.127s 00:34:11.004 user 0m0.062s 00:34:11.004 sys 0m0.065s 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.004 08:31:15 
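The `es` bookkeeping in the trace above (`es=1`, then the `(( es > 128 ))` check) relies on the shell convention that a process killed by signal N exits with status 128+N, so statuses above 128 mark signal deaths rather than ordinary command failures. A minimal, self-contained illustration of that convention (the `sleep` duration is arbitrary):

```shell
#!/usr/bin/env bash
# A child killed by signal N exits with status 128+N; SIGKILL is signal 9,
# so the reaped status should be 128 + 9 = 137.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
status=$?
echo "$status"
```

Here `es=1` (a plain failure) falls below the 128 threshold, which is why the harness treats the `reconnect` error above as an expected test failure rather than a crash.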
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:11.004 ************************************ 00:34:11.004 END TEST nvmf_target_disconnect_tc1 00:34:11.004 ************************************ 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:11.004 ************************************ 00:34:11.004 START TEST nvmf_target_disconnect_tc2 00:34:11.004 ************************************ 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2186932 00:34:11.004 08:31:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2186932 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2186932 ']' 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.004 08:31:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.004 [2024-11-20 08:31:15.726431] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:34:11.004 [2024-11-20 08:31:15.726488] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.266 [2024-11-20 08:31:15.837333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.266 [2024-11-20 08:31:15.890701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.266 [2024-11-20 08:31:15.890761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.266 [2024-11-20 08:31:15.890770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.266 [2024-11-20 08:31:15.890777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.266 [2024-11-20 08:31:15.890784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
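The `-m 0xF0` core mask passed to `nvmf_tgt` above selects CPU cores 4 through 7, which is why the reactor lines that follow report startup on cores 4, 5, 6 and 7 (in whatever order the threads come up). A small sketch of how such a hex mask decodes (the helper name is invented for illustration):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core IDs selected by an SPDK/DPDK-style hex core mask.

    Each set bit at position N selects core N.
    """
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xF0))  # → [4, 5, 6, 7]
```

The same decoding applies to the `reconnect` tool's `-c 0xF` argument, which pins its I/O threads to cores 0-3, disjoint from the target's cores 4-7.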
00:34:11.266 [2024-11-20 08:31:15.893285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:11.266 [2024-11-20 08:31:15.893444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:11.266 [2024-11-20 08:31:15.893626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:11.266 [2024-11-20 08:31:15.893627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.884 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 Malloc0 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 [2024-11-20 08:31:16.638698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 [2024-11-20 08:31:16.679153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2187264 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:12.210 08:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:14.131 08:31:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2186932 00:34:14.131 08:31:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:14.131 Read completed with error (sct=0, sc=8) 00:34:14.131 starting I/O failed 00:34:14.131 Read completed with error (sct=0, sc=8) 00:34:14.131 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 
Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Read completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 Write completed with error (sct=0, sc=8) 00:34:14.132 starting I/O failed 00:34:14.132 [2024-11-20 08:31:18.714107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.132 [2024-11-20 08:31:18.714414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.714440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.714758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 
00:34:14.132 [2024-11-20 08:31:18.715237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.715282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.715504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.715519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.715821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.715832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.716217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.716253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.716560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.716573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 
00:34:14.132 [2024-11-20 08:31:18.717093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.717129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.717465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.717478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.717773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.717784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.717997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.718009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.718313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.718324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 
00:34:14.132 [2024-11-20 08:31:18.718673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.718683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.719095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.719106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.719493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.719504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.719834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.719844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.720042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.720053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 
00:34:14.132 [2024-11-20 08:31:18.720384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.720395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.720733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.720743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.721056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.721067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.721384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.721394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.721712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 
00:34:14.132 [2024-11-20 08:31:18.722043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.722054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.722343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.722353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.722556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.722566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.722909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.132 [2024-11-20 08:31:18.722919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.132 qpair failed and we were unable to recover it. 00:34:14.132 [2024-11-20 08:31:18.723217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.723227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-11-20 08:31:18.723519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.723529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.723843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.723854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.724154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.724165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.724512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.724688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.724699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-11-20 08:31:18.725047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.725057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.725229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.725241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.725545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.725557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.725875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.725886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.726172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.726182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-11-20 08:31:18.726482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.726493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.726777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.726787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.727095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.727105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.727465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.727476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.727688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.727698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-11-20 08:31:18.727970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.727981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.728276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.728286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.728571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.728582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.728773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.728784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-11-20 08:31:18.729114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.133 [2024-11-20 08:31:18.729124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.133 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" messages, repeated with successive timestamps through 08:31:18.762325, omitted]
00:34:14.136 [2024-11-20 08:31:18.762620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.762636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.762955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.762973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.763185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.763204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.763528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.763543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.763873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.763890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 
00:34:14.136 [2024-11-20 08:31:18.764260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.764277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.764578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.764594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.764882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.764899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.765222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.765238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.765421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.765439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 
00:34:14.136 [2024-11-20 08:31:18.765634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.136 [2024-11-20 08:31:18.765652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.136 qpair failed and we were unable to recover it. 00:34:14.136 [2024-11-20 08:31:18.765992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.766009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.766308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.766324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.766616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.766633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.766906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.766923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.767165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.767182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.767522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.767539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.767778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.767795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.768126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.768143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.768515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.768531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.768874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.768897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.769250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.769584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.769783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.769804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.770131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.770153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.770462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.770483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.770803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.770824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.771186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.771208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.771561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.771582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.771786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.771808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.772044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.772067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.772273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.772294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.772437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.772457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.772805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.772830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.773046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.773069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.773426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.773804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.773824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.774168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.774189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.774490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.774509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.774878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.774899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.775175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.775195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.775469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.775489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.775782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.775803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.776033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.776054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.776314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.776334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-11-20 08:31:18.776536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.776558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.776792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.776812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.776946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.776968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-11-20 08:31:18.777324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-11-20 08:31:18.777344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.777566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.777586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.777925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.777946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.778181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.778203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.778533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.778553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.778870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.778891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.779134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.779155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.779551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.779571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.779797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.780072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.780093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.780394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.780415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.780757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.780777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.781099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.781121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.781342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.781362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.781569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.781589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.781910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.781931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.782333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.782354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.782543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.782570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.782800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.782828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.783212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.783242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.783589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.783616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.783992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.784021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.784377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.784405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.784536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.784563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.784922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.784951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.785310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.785344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.785672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.785700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.786085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.786114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.786363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.786390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.786551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.786581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.786915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.786945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.787295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.787323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.787648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.787675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.788017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.788045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.788399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.788426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.788790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.788818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-11-20 08:31:18.789166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-11-20 08:31:18.789195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-11-20 08:31:18.789315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.138 [2024-11-20 08:31:18.789342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.138 qpair failed and we were unable to recover it.
00:34:14.138 [2024-11-20 08:31:18.789698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.789726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.790056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.790085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.790453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.790480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.790857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.790894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.791018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.791048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.791271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.791300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.791651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.791679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.791906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.791938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.792318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.792347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.792677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.792706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.793054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.793084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.793403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.793431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.793795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.793823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.794019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.794048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.794388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.794417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.794746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.794773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.795012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.795045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.795422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.795450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.795820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.795848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.796213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.796241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.796590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.796618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.796942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.796972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.797321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.797348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.797701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.797730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.798064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.798092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.798445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.798473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.798835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.798869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.799100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.799136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.799464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.799492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.799819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.799847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.800157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.800186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.800529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.800556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.800953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.800983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.801288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.801316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.801665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.801693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.802058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.802087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.802414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.802442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.139 [2024-11-20 08:31:18.802792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.139 [2024-11-20 08:31:18.802821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.139 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.803185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.803215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.803550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.803579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.803973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.804002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.804341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.804370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.804703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.804732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.804965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.804998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.805341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.805370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.805721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.805750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.806108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.806140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.806263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.806291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.806501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.806529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.806893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.806923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.807312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.807340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.807697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.807725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.808098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.808127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.808496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.808524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.808925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.808956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.809301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.809335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.809542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.809570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.809947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.809976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.810319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.810347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.810698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.810725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.811077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.811105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.811275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.811305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.811684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.811711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.812077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.812106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.812429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.812457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.812810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.812837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.813235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.813263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 [2024-11-20 08:31:18.813611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.140 [2024-11-20 08:31:18.813645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.140 qpair failed and we were unable to recover it.
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Read completed with error (sct=0, sc=8)
00:34:14.140 starting I/O failed
00:34:14.140 Write completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Write completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Write completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Write completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Read completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Read completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Write completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Read completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 Read completed with error (sct=0, sc=8)
00:34:14.141 starting I/O failed
00:34:14.141 [2024-11-20 08:31:18.813949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:14.141 [2024-11-20 08:31:18.814402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.814439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.814759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.814771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.815175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.815212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.815548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.815559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.815891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.815912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.816222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.816232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.816529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.816539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.816869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.816879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.817191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.817201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.817475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.817485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.141 qpair failed and we were unable to recover it.
00:34:14.141 [2024-11-20 08:31:18.817794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.141 [2024-11-20 08:31:18.817803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.033004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.033065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.033437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.033449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.033789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.033801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.034133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.034147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.034487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.034500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.034881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.034894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.035323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.035386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.035767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.035781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.036239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.036303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.036646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.036660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.037126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.037189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.037463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.037477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.037716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.037728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.038066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.038079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.038300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.038312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.038645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.038656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.039001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.039012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.039248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.039261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.039995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.405 [2024-11-20 08:31:19.040007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.405 qpair failed and we were unable to recover it.
00:34:14.405 [2024-11-20 08:31:19.040322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.040333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.040647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.040658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.041056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.041074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.041299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.041310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.041668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.041679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.042033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.042045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.042393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.042404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.042713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.042724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.043061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.043073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.043472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.043483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.043844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.043855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.044178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.044189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.044530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.044541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.044760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.044771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.045112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.045123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.045439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.045450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.045822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.045832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.046135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.046147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.046496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.046506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.046768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.046778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.046996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.047008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.047380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.047391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.047621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.047633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.047922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.047933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.048273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.048284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.048692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.048702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.048908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.048919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.049287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.049297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.049614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.049624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.050026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.050040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.050395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.050405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.050734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.050746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.051092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.051105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.051464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.406 [2024-11-20 08:31:19.051475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.406 qpair failed and we were unable to recover it.
00:34:14.406 [2024-11-20 08:31:19.051804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.051814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.052149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.052161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.052546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.052556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.052768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.052779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.053137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.053149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.053478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.053488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.053835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.053845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.054178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.054191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.054409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.054420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.054745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.054757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.055104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.055116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.055440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.055451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.055767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.055778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.056099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.056110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.056440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.056450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.056835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.056845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.057164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.057175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.057400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.057410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.057629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.057641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.057965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.057977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.058288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.058299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.058649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.058660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.058999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.059011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.059407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.059418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.059742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.059753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.059985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.059995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.060344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.060354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.060680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.060691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.061033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.061045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.061366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.061376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.061603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.061613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.061947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.061959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.062298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.062308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.062665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.062676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.062867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.062877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.063236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.063247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.063574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.407 [2024-11-20 08:31:19.063588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.407 qpair failed and we were unable to recover it.
00:34:14.407 [2024-11-20 08:31:19.063918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.063930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.064252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.064264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.064462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.064473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.064819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.064831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.065191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.065202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.065545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.065556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.065922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.065935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.066026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.066036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.066334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.066346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.066678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.066689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.067059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.408 [2024-11-20 08:31:19.067070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.408 qpair failed and we were unable to recover it.
00:34:14.408 [2024-11-20 08:31:19.067389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.067400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.067750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.067761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.067990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.068001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.068308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.068320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.068656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.068667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 
00:34:14.408 [2024-11-20 08:31:19.069014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.069024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.069380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.069392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.069610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.069620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.069981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.069991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.070341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.070352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 
00:34:14.408 [2024-11-20 08:31:19.070676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.070688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.070917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.070929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.071187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.071198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.071420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.071430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.071769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.071779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 
00:34:14.408 [2024-11-20 08:31:19.072087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.072107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.072444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.072454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.072779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.072789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.073150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.073162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.073513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.073524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 
00:34:14.408 [2024-11-20 08:31:19.073850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.408 [2024-11-20 08:31:19.073864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.408 qpair failed and we were unable to recover it. 00:34:14.408 [2024-11-20 08:31:19.074055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.074066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.074332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.074342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.074555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.074565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.074883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.074895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.075234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.075244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.075593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.075604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.075965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.075976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.076304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.076314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.076664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.076682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.077051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.077062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.077387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.077397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.077721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.077731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.078069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.078079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.078420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.078430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.078796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.078806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.079110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.079123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.079442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.079453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.079660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.079670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.079894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.079905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.080245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.080255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.080618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.080629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.080981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.080992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.081337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.081348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.081673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.081684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.081953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.081964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.082283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.082295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.082494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.082506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.082813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.082824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.083186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.083197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.083498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.083510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.083821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.083833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.084189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.084200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.084502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.084514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.084841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.084853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 
00:34:14.409 [2024-11-20 08:31:19.085080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.085092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.085320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.085336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.085663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.409 [2024-11-20 08:31:19.085675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.409 qpair failed and we were unable to recover it. 00:34:14.409 [2024-11-20 08:31:19.086089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.086100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.086445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.086455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.086796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.086807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.087156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.087167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.087495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.087506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.087820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.087832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.088197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.088208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.088544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.088555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.088956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.088967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.089278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.089289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.089639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.089649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.089982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.089993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.090338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.090348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.090698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.090709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.091083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.091095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.091389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.091400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.091738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.091748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.092082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.092093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.092421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.092432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.092745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.092755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.093080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.093090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.093436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.093820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.093832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.094150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.094163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.094345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.094358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.094698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.094713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 00:34:14.410 [2024-11-20 08:31:19.095067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.095078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.410 qpair failed and we were unable to recover it. 
00:34:14.410 [2024-11-20 08:31:19.095412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.410 [2024-11-20 08:31:19.095424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.095768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.095780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.096130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.096142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.096499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.096511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.096839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.096850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 
00:34:14.411 [2024-11-20 08:31:19.097203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.097215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.097575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.097587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.097918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.097929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.098255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.098265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 00:34:14.411 [2024-11-20 08:31:19.098602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.411 [2024-11-20 08:31:19.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.411 qpair failed and we were unable to recover it. 
00:34:14.411 [2024-11-20 08:31:19.098966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.098977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.099300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.099311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.099651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.099661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.100067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.100078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.100290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.100300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.100614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.100625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.100975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.100986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.101317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.101327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.101626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.101636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.102007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.102019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.102351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.102361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.102692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.102704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.103055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.103065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.103347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.103357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.103568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.103578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.103911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.103922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.104281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.104292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.104592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.104610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.104833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.104843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.105089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.105439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.105449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.105842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.105852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.106188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.106199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.106539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.106549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.106905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.106923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.107136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.107148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.107374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.107385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.107725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.107737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.411 qpair failed and we were unable to recover it.
00:34:14.411 [2024-11-20 08:31:19.108059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.411 [2024-11-20 08:31:19.108071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.108392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.108405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.108741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.108752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.108948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.108959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.109135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.109146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.109442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.109452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.109777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.109787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.109983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.109996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.110230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.110241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.110523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.110533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.110870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.110880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.111195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.111206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.111545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.111555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.111745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.111756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.112050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.112061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.112425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.112436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.112753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.112763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.113101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.113113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.113443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.113453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.113859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.113875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.114205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.114216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.114501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.114511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.114735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.114745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.115099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.115110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.115418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.115429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.115755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.115765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.116107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.116119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.116444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.116454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.116759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.116770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.117122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.117132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.117451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.117461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.117785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.117797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.118020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.118031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.118356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.118366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.118684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.118695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.119024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.119034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.119349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.119360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.119679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.119689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.412 [2024-11-20 08:31:19.119995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.412 [2024-11-20 08:31:19.120007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.412 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.120343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.120697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.120708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.121038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.121049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.121376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.121387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.121725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.121744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.122087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.122097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.122436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.122448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.122643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.122654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.122992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.123003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.123246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.123256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.123598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.123608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.123937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.123948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.124288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.124299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.124653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.124663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.125007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.125019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.125226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.125236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.125581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.125591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.125952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.125963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.126311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.126642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.126654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.127004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.127015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.127411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.127421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.127619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.127630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.127968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.127979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.128372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.128383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.128714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.128724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.413 [2024-11-20 08:31:19.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.413 [2024-11-20 08:31:19.129055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.413 qpair failed and we were unable to recover it.
00:34:14.689 [2024-11-20 08:31:19.129417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.689 [2024-11-20 08:31:19.129431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.689 qpair failed and we were unable to recover it.
00:34:14.689 [2024-11-20 08:31:19.129764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.689 [2024-11-20 08:31:19.129775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.689 qpair failed and we were unable to recover it. 00:34:14.689 [2024-11-20 08:31:19.130125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.689 [2024-11-20 08:31:19.130138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.689 qpair failed and we were unable to recover it. 00:34:14.689 [2024-11-20 08:31:19.130452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.689 [2024-11-20 08:31:19.130467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.689 qpair failed and we were unable to recover it. 00:34:14.689 [2024-11-20 08:31:19.130778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.689 [2024-11-20 08:31:19.130788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.689 qpair failed and we were unable to recover it. 00:34:14.689 [2024-11-20 08:31:19.131108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.689 [2024-11-20 08:31:19.131120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.689 qpair failed and we were unable to recover it. 
00:34:14.689 [2024-11-20 08:31:19.131468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.131477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.131806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.131818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.132160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.132171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.132394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.132404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.132722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.132732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.133093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.133106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.133421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.133431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.133730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.133741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.134040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.134051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.134452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.134462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.134646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.134657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.134993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.135004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.135379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.135390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.135705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.135716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.136042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.136052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.136352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.136362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.136696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.136706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.137054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.137065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.137285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.137295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.137642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.137652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.137990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.138001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.138316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.138327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.138639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.138649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.139046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.139056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.139335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.139345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.139694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.139705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.140058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.140070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.140297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.140308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.140657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.140668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.140986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.140996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.141329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.141339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.141750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.141760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.142066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.142076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.142399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.142409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.142738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.142749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.143084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.143094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.143426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.143437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.143749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.143759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.143947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.143961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.144333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.144343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.144641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.144651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.145006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.145016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.145325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.145335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.145646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.145656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.145964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.145975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.146303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.146313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.146641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.146652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.146986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.146997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.147302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.147313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.147653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.147664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.147982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.147993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.148222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.148232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.148556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.148566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.148902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.148915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.149168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.149178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.149477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.149489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.149839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.149849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.150190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.150202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.150561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.150571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.150945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.150956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.690 [2024-11-20 08:31:19.151253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.151264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.151622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.151632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.151901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.151912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.152080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.152091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 00:34:14.690 [2024-11-20 08:31:19.152270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.690 [2024-11-20 08:31:19.152281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.690 qpair failed and we were unable to recover it. 
00:34:14.691 [2024-11-20 08:31:19.152625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.152637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.152840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.153169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.153180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.153504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.153514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.153751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.153761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 
00:34:14.691 [2024-11-20 08:31:19.154090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.154100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.154399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.154410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.154729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.154739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.155046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.155057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.155387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.155398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 
00:34:14.691 [2024-11-20 08:31:19.155711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.155721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.156032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.156050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.156431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.156442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.156730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.156740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 00:34:14.691 [2024-11-20 08:31:19.157090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.691 [2024-11-20 08:31:19.157101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.691 qpair failed and we were unable to recover it. 
00:34:14.691 [2024-11-20 08:31:19.157431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.691 [2024-11-20 08:31:19.157443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.691 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair recovery failure records for tqpair=0xa2f490 (addr=10.0.0.2, port=4420) repeat through 2024-11-20 08:31:19.194940; duplicates elided]
00:34:14.692 [2024-11-20 08:31:19.195251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.692 [2024-11-20 08:31:19.195264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.692 qpair failed and we were unable to recover it. 00:34:14.692 [2024-11-20 08:31:19.195579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.692 [2024-11-20 08:31:19.195589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.692 qpair failed and we were unable to recover it. 00:34:14.692 [2024-11-20 08:31:19.195969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.692 [2024-11-20 08:31:19.195980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.692 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.196330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.196340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.196681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.196691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.196908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.196918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.197090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.197103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.197435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.197445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.197657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.197667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.197990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.198001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.198300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.198311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.198628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.198638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.198921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.198933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.199206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.199216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.199515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.199526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.199841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.199851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.200167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.200178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.200532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.200543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.200839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.200850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.201190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.201201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.201523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.201543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.201909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.201923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.202237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.202248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.202414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.202425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.202704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.202715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.203022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.203033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.203350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.203361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.203675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.203685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.204038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.204049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.204345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.204356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.204701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.204711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.205017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.205027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.205348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.205358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.205722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.205732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.206035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.206046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.206366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.206377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.206764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.206774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.207090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.207437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.207447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.207840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.207851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.208059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.208070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.208417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.208427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.208620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.208630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.208970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.208981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.209305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.209316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.209533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.209544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.209872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.209884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.210213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.210225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.210548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.210559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.210786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.210796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.211113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.211124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.211414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.211424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.211717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.211728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.212043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.212053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.212352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.212362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.212676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.212687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.213018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.213029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.213345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.213356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.213729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.213740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.214033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.214043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.214249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.214259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.214570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.214581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.214872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.214883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.215209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.215220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.215527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.215537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.215866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.215878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.216199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.216209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.216516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.216526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.216841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.216852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.217202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.217212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.693 [2024-11-20 08:31:19.217543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.217555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.217865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.217877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.218189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.218199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.218523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.218534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 00:34:14.693 [2024-11-20 08:31:19.218878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.693 [2024-11-20 08:31:19.218889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.693 qpair failed and we were unable to recover it. 
00:34:14.694 [2024-11-20 08:31:19.219218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.694 [2024-11-20 08:31:19.219229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.694 qpair failed and we were unable to recover it. 00:34:14.694 [2024-11-20 08:31:19.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.694 [2024-11-20 08:31:19.219553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.694 qpair failed and we were unable to recover it. 00:34:14.694 [2024-11-20 08:31:19.219872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.694 [2024-11-20 08:31:19.219884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.694 qpair failed and we were unable to recover it. 00:34:14.694 [2024-11-20 08:31:19.220198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.694 [2024-11-20 08:31:19.220208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.694 qpair failed and we were unable to recover it. 00:34:14.694 [2024-11-20 08:31:19.220616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.694 [2024-11-20 08:31:19.220627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.694 qpair failed and we were unable to recover it. 
00:34:14.695 (preceding connect()/qpair error triplet repeated 110 more times, last at [2024-11-20 08:31:19.256092]; duplicate entries elided)
00:34:14.695 [2024-11-20 08:31:19.256397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.256409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.256758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.256767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.257082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.257096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.257433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.257443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.257722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.257733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 
00:34:14.695 [2024-11-20 08:31:19.257953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.257965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.258334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.258344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.258670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.258681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.259026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.259037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.259356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.259367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 
00:34:14.695 [2024-11-20 08:31:19.259679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.259689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.260006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.260017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.260320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.260330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.260635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.260645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.260955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.260966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 
00:34:14.695 [2024-11-20 08:31:19.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.261294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.261583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.261593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.261837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.261847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.262239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.262250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.262580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.262592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 
00:34:14.695 [2024-11-20 08:31:19.262940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.262951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.263265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.263275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.263590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.695 [2024-11-20 08:31:19.263601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.695 qpair failed and we were unable to recover it. 00:34:14.695 [2024-11-20 08:31:19.263930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.263942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.264254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.264264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.264557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.264574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.264887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.264899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.265214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.265226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.265608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.265618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.265912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.265923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.266245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.266256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.266570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.266581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.266876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.266887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.267211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.267222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.267536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.267550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.267873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.267884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.268191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.268201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.268591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.268602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.268921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.268932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.269253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.269264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.269649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.269660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.269989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.270001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.270224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.270235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.270443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.270454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.270815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.270826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.271127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.271139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.271467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.271478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.271788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.271799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.272115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.272127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.272433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.272443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.272648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.272658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.272900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.272912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.273297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.273307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.273588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.273599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.273911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.273923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.274238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.274248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.274588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.274599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.274910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.274923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.275241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.275252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.275611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.275622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.275968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.275979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.276304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.276324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.276511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.276520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.276841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.277059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.277070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.277386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.277398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.277731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.277741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.278065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.278077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.278370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.278382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.278696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.278706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.278945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.278956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.279246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.279256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.279470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.279480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.279829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.279841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.280145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.280155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.696 [2024-11-20 08:31:19.280462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.280472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.280754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.280764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.281096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.281106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.281302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.281312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 00:34:14.696 [2024-11-20 08:31:19.281524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.696 [2024-11-20 08:31:19.281535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.696 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.315378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.315693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.315702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.315994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.316005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.316268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.316278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.316491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.316501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.316801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.316811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.317146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.317156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.317342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.317353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.317766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.317776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.318077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.318088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.318376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.318386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.318718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.318729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.319048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.319059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.319332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.319342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.319615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.319624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.319943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.319955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.320270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.320279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.320603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.320614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.320930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.320941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.321250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.321260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.321580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.321590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.321913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.321925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.322277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.322288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.322588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.322598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.322885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.322896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.323147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.323159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.323475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.323485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.323869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.323879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.324118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.324128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.324464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.324474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.324768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.324779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.325087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.325098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.325431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.325443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.325757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.325767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.325939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.325949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 
00:34:14.698 [2024-11-20 08:31:19.326188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.326198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.326398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.326408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.326600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.326610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.698 qpair failed and we were unable to recover it. 00:34:14.698 [2024-11-20 08:31:19.326945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.698 [2024-11-20 08:31:19.326956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.327302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.327313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.327695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.327706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.328052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.328063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.328382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.328392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.328670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.328680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.328976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.328986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.329353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.329364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.329682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.329693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.330021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.330032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.330346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.330359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.330696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.330706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.331035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.331045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.331260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.331270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.331579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.331588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.331902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.331912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.332229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.332240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.332523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.332534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.332714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.332723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.333001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.333012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.333338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.333348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.333648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.333658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.333890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.333902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.334239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.334250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.334531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.334548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.334866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.334876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.335256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.335266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.335468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.335478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.335680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.335690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.335975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.335985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.336234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.336245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.336566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.336577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.336871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.336882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.337127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.337137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.337470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.337480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.337811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 00:34:14.699 [2024-11-20 08:31:19.338105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.699 [2024-11-20 08:31:19.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.699 qpair failed and we were unable to recover it. 
00:34:14.699 [2024-11-20 08:31:19.338411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.699 [2024-11-20 08:31:19.338422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:14.699 qpair failed and we were unable to recover it.
[... identical connect()/qpair error triplet (errno = 111, ECONNREFUSED) for tqpair=0xa2f490 at 10.0.0.2:4420 repeats continuously from 08:31:19.338724 through 08:31:19.372244; repeated entries elided ...]
00:34:14.701 [2024-11-20 08:31:19.372451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.372461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.372787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.372797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.373042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.373053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.373356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.373366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.373630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.373640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.373975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.373985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.374283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.374293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.374625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.374635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.374829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.374840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.375148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.375159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.375514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.375524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.375838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.375848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.376119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.376130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.376302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.376311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.376629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.376640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.376970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.376982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.377292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.377304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.377611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.377622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.377939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.377949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.378257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.378267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.378465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.378475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.378815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.378825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.379110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.379121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.379421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.379431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.379726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.379737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.380034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.380045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.380241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.380251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.380591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.380601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.380897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.380908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.381233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.381243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.381447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.381457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.381687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.381698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.381789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.381799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.382343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.382436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.382931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.382985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.383224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.383266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.383530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.383560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.383939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.383986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.384291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.384319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.384712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.384741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.385149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.385180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.385423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.385455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.385683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.385714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.386061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.386091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.386431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.386460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.386805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.386835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.386986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.387014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.387403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.387433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.387755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.387784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.388129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.388162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.388576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.388606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.388813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.388842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.389328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.389358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.389709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.389738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.390091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.390119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 
00:34:14.701 [2024-11-20 08:31:19.390491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.701 [2024-11-20 08:31:19.390521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.701 qpair failed and we were unable to recover it. 00:34:14.701 [2024-11-20 08:31:19.390880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.390910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.391276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.391305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.391564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.391593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.391955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.391987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 
00:34:14.702 [2024-11-20 08:31:19.392250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.392278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.392649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.392679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.393038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.393070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.393423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.393452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.393798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.393826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 
00:34:14.702 [2024-11-20 08:31:19.394145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.394175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.394535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.394564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.394911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.394940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.395258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.395287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.395600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.395629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 
00:34:14.702 [2024-11-20 08:31:19.395979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.396010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.396404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.396432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.396748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.396777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.397128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.397158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 00:34:14.702 [2024-11-20 08:31:19.397507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.702 [2024-11-20 08:31:19.397535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.702 qpair failed and we were unable to recover it. 
00:34:14.702 [2024-11-20 08:31:19.397927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.702 [2024-11-20 08:31:19.397962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:14.702 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and qpair recovery errors for tqpair=0x7fbeb8000b90 (addr=10.0.0.2, port=4420) repeat through 08:31:19.439320; duplicate records omitted]
00:34:14.976 [2024-11-20 08:31:19.439677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.439705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 00:34:14.976 [2024-11-20 08:31:19.440039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.440070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 00:34:14.976 [2024-11-20 08:31:19.440414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.440443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 00:34:14.976 [2024-11-20 08:31:19.440800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.440828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 00:34:14.976 [2024-11-20 08:31:19.441155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.441184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 
00:34:14.976 [2024-11-20 08:31:19.441321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.441355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.976 qpair failed and we were unable to recover it. 00:34:14.976 [2024-11-20 08:31:19.441710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.976 [2024-11-20 08:31:19.441739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.442121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.442150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.442524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.442553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.442892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.442923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.443264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.443292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.443671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.443699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.444061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.444091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.444435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.444463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.444792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.444821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.445181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.445211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.445545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.445573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.445926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.445956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.446296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.446332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.446683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.446712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.447057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.447087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.447442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.447472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.447833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.447873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.448203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.448232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.448622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.448649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.448982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.449013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.449375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.449404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.449844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.449881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.450213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.450242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.450577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.450606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.451002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.451032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.451410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.451440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.451793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.451824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.452086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.452445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.452474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.452816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.452845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.453063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.453091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.453472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.453838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.453876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.454213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.454242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 
00:34:14.977 [2024-11-20 08:31:19.454589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.454618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.454950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.454981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.455317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.455346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.455504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.977 [2024-11-20 08:31:19.455533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.977 qpair failed and we were unable to recover it. 00:34:14.977 [2024-11-20 08:31:19.455893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.455923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.456274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.456309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.456631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.457024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.457054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.457407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.457436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.457767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.457796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.458114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.458144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.458500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.458529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.458887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.458918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.459278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.459306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.459661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.459690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.460021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.460050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.460398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.460427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.460848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.460885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.461190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.461219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.461599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.461627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.462019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.462049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.462293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.462324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.462589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.462618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.462997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.463027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.463393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.463422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.463729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.463758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.464111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.464141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.464515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.464544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.464911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.464941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.465291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.465319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 
00:34:14.978 [2024-11-20 08:31:19.465678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.465706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.466077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.466108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.978 [2024-11-20 08:31:19.466442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.978 [2024-11-20 08:31:19.466472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.978 qpair failed and we were unable to recover it. 00:34:14.979 [2024-11-20 08:31:19.466792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.979 [2024-11-20 08:31:19.466820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.979 qpair failed and we were unable to recover it. 00:34:14.979 [2024-11-20 08:31:19.467152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.979 [2024-11-20 08:31:19.467182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.979 qpair failed and we were unable to recover it. 
00:34:14.979 [2024-11-20 08:31:19.467436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.979 [2024-11-20 08:31:19.467464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.979 qpair failed and we were unable to recover it. 
[above pair of messages (connect() failed, errno = 111; sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it") repeated continuously through 2024-11-20 08:31:19.511321; duplicate entries omitted]
00:34:14.982 [2024-11-20 08:31:19.511688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.511717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.512063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.512093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.512338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.512367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.512600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.512632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.512952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.512983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 
00:34:14.982 [2024-11-20 08:31:19.513327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.513356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.513709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.982 [2024-11-20 08:31:19.513738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.982 qpair failed and we were unable to recover it. 00:34:14.982 [2024-11-20 08:31:19.514086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.514117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.514439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.514469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.514714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.514747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.515081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.515111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.515460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.515489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.515849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.515885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.516270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.516299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.516671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.516700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.517028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.517064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.517423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.517452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.517805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.517833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.518178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.518208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.518556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.518585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.518941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.518971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.519374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.519403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.519735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.519764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.520101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.520131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.520473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.520502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.520937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.520966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.521316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.521560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.521589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.521905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.521934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.522204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.522550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.522578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.522933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.522963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.523339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.523368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.523678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.523707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.523945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.523974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.524331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.524360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.524726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.524754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.525095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.525124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.525488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.525517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.525880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.525910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.983 [2024-11-20 08:31:19.526238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.526267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.526616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.526644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.526980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.527011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.527373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 00:34:14.983 [2024-11-20 08:31:19.527736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.983 [2024-11-20 08:31:19.527765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.983 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.528005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.528038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.528267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.528298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.528665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.528694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.529039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.529069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.529456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.529485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.529733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.529763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.530172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.530203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.530577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.530606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.531005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.531035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.531449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.531478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.531838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.531882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.532140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.532168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.532476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.532505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.532868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.532899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.533256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.533284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.533635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.533664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.534016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.534046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.534312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.534341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.534597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.534629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.534902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.534934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.535294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.535324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.535666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.535696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.536123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.536153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.536362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.536394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.536653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.536683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.536932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.536963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.537345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.537375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.537744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.537773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.538116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.538147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 00:34:14.984 [2024-11-20 08:31:19.538369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.538401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it. 
00:34:14.984 [2024-11-20 08:31:19.538763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.984 [2024-11-20 08:31:19.538792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.984 qpair failed and we were unable to recover it.
[repeated entries elided: the same connect() failure (errno = 111) and qpair connection error for tqpair=0x7fbeb8000b90 against addr=10.0.0.2, port=4420 recur continuously from 08:31:19.538 through 08:31:19.579]
00:34:14.988 [2024-11-20 08:31:19.579957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.579988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.580275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.580303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.580412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.580440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.580762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.580791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.581048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.581077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.581493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.581521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.581854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.581891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.582227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.582256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.582625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.582653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.582929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.582958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.583195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.583224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.583624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.583653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.584056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.584092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.584313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.584342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.584669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.584699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.585061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.585091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.585470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.585499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.585845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.585883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.586248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.586277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.586638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.586668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.587055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.587167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.587199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.587567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.587596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.587957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.587988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.588314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.588342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.588695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.588725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.589069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.589100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.589465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.589493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.589850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.589886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 00:34:14.988 [2024-11-20 08:31:19.590276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.988 [2024-11-20 08:31:19.590305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.988 qpair failed and we were unable to recover it. 
00:34:14.988 [2024-11-20 08:31:19.590636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.590665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.591042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.591072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.591273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.591302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.591660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.591689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.592064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.592094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.592467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.592496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.592836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.592872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.593218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.593248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.593605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.593635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.593829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.593860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.594228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.594258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.594628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.594658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.595010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.595040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.595301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.595330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.595664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.595692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.596016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.596047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.596279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.596310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.596657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.596686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.597028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.597058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.597292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.597323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.597715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.597744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.598088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.598118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.598468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.598504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.598902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.598932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.599308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.599337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.599579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.599608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.599926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.599956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.600299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.600327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.600704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.600733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.601093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.601123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 
00:34:14.989 [2024-11-20 08:31:19.601474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.601504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.601848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.601890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.602219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.602249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.989 qpair failed and we were unable to recover it. 00:34:14.989 [2024-11-20 08:31:19.602585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.989 [2024-11-20 08:31:19.602613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.602942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.602973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 
00:34:14.990 [2024-11-20 08:31:19.603195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.603227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.603598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.603627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.603984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.604013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.604422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.604451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.604810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.604839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 
00:34:14.990 [2024-11-20 08:31:19.605189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.605220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.605575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.605604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.605957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.605986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.606344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.606372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.606599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.606628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 
00:34:14.990 [2024-11-20 08:31:19.607025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.607055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.607305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.607334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.607672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.607701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.607977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.608009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 00:34:14.990 [2024-11-20 08:31:19.608377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.990 [2024-11-20 08:31:19.608407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.990 qpair failed and we were unable to recover it. 
00:34:14.993 [2024-11-20 08:31:19.650077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.650107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.650457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.650489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.650845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.650881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.651299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.651328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.651686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.651715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 
00:34:14.993 [2024-11-20 08:31:19.652068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.652098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.652323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.652351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.652697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.652727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.653068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.653098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.653467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.653497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 
00:34:14.993 [2024-11-20 08:31:19.653847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.653884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.654223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.654252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.654618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.654647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.654889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.654918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.655283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.655313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 
00:34:14.993 [2024-11-20 08:31:19.655666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.655695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.656030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.656059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.656435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.656464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.656855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.993 [2024-11-20 08:31:19.656892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.993 qpair failed and we were unable to recover it. 00:34:14.993 [2024-11-20 08:31:19.657225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.657254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.657637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.657666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.658002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.658033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.658352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.658386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.658617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.658649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.659013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.659043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.659410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.659438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.659797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.659826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.660225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.660255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.660611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.660641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.661006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.661036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.661261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.661289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.661636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.661664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.661916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.661945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.662283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.662312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.662711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.662739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.663109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.663139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.663477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.663506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.663881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.663912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.664280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.664309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.664664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.664693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.665070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.665099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.665453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.665482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.665885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.665915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.666277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.666305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.666709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.666737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.667091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.667121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.667452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.667481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.667841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.667895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.668228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.668256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.668626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.668656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.669008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.669039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.669280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.669311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.669671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.669701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.670077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.670107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 00:34:14.994 [2024-11-20 08:31:19.670469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.994 [2024-11-20 08:31:19.670497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.994 qpair failed and we were unable to recover it. 
00:34:14.994 [2024-11-20 08:31:19.670909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.670939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.671217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.671246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.671625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.671654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.671999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.672028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.672295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.672322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 
00:34:14.995 [2024-11-20 08:31:19.672689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.672717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.672938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.672969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.673311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.673699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.673729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.673969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.674000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 
00:34:14.995 [2024-11-20 08:31:19.674348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.674376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.674730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.674759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.675098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.675129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.675475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.675504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.675853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.675890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 
00:34:14.995 [2024-11-20 08:31:19.676201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.676231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.676547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.676575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.676922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.676953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.677291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.677319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.677549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.677577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 
00:34:14.995 [2024-11-20 08:31:19.677906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.677936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.678309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.678339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.678671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.678701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.679048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.679078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 00:34:14.995 [2024-11-20 08:31:19.679443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.995 [2024-11-20 08:31:19.679473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:14.995 qpair failed and we were unable to recover it. 
00:34:15.273 [2024-11-20 08:31:19.721374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.273 [2024-11-20 08:31:19.721404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.273 qpair failed and we were unable to recover it. 00:34:15.273 [2024-11-20 08:31:19.721746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.273 [2024-11-20 08:31:19.721777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.273 qpair failed and we were unable to recover it. 00:34:15.273 [2024-11-20 08:31:19.722123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.273 [2024-11-20 08:31:19.722153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.273 qpair failed and we were unable to recover it. 00:34:15.273 [2024-11-20 08:31:19.722514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.273 [2024-11-20 08:31:19.722544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.273 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.722898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.722929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.723330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.723358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.723722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.723753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.724116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.724147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.724510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.724539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.724779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.724809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.725158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.725189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.725451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.725480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.725834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.725871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.726319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.726667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.726696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.727051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.727081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.727436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.727465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.727818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.727847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.728192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.728223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.728594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.728625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.728989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.729020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.729398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.729427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.729839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.729881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.730246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.730276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.730632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.730661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.731050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.731079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.731512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.731541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.731923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.731953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.732392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.732420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.732561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.732591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 
00:34:15.274 [2024-11-20 08:31:19.732920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.732951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.733335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.733365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.733603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.733639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.733889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.274 [2024-11-20 08:31:19.733920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.274 qpair failed and we were unable to recover it. 00:34:15.274 [2024-11-20 08:31:19.734320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.734348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.734708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.734737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.735059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.735090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.735432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.735460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.735813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.735842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.736287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.736317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.736645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.736673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.737106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.737136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.737497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.737527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.737696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.737727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.737986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.738016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.738376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.738405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.738832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.738861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.739215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.739245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.739540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.739569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.739920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.739950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.740205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.740234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.740563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.740594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.740944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.740976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.741343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.741372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.741728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.741757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.742153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.742183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.742518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.742548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.742905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.742935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.743301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.743329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.743693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.743723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.744084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.744114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.744473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.744502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.744842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.744879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.745210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.745240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.745601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.745629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 
00:34:15.275 [2024-11-20 08:31:19.745857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.745896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.746233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.746262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.746621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.746650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.275 [2024-11-20 08:31:19.746995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.275 [2024-11-20 08:31:19.747026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.275 qpair failed and we were unable to recover it. 00:34:15.276 [2024-11-20 08:31:19.747364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.747394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 
00:34:15.276 [2024-11-20 08:31:19.747724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.747753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 00:34:15.276 [2024-11-20 08:31:19.748136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.748165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 00:34:15.276 [2024-11-20 08:31:19.748449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.748478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 00:34:15.276 [2024-11-20 08:31:19.748807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.748838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 00:34:15.276 [2024-11-20 08:31:19.749215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.276 [2024-11-20 08:31:19.749245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.276 qpair failed and we were unable to recover it. 
00:34:15.276 [2024-11-20 08:31:19.749612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.276 [2024-11-20 08:31:19.749640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:15.276 qpair failed and we were unable to recover it.
00:34:15.276-00:34:15.280 [the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats for every reconnect attempt from 2024-11-20 08:31:19.749996 through 08:31:19.792296]
00:34:15.280 [2024-11-20 08:31:19.792539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.792571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.792930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.792962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.793333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.793362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.793615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.793644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.794007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.794037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.280 [2024-11-20 08:31:19.794382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.794410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.794774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.794804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.795161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.795192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.795546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.795576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.795839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.795874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.280 [2024-11-20 08:31:19.796294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.796323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.796673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.797020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.797050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.797418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.797447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.797806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.797835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.280 [2024-11-20 08:31:19.798105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.798135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.798506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.798537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.798771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.798801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.799178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.799209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.799537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.799567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.280 [2024-11-20 08:31:19.799933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.799963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.800224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.800253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.800612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.800641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.800997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.801027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.801406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.801435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.280 [2024-11-20 08:31:19.801671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.801702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.802058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.802089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.802429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.802458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.802841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.802880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 00:34:15.280 [2024-11-20 08:31:19.803126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.280 [2024-11-20 08:31:19.803161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.280 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.803553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.803582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.803949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.803980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.804349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.804379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.804772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.804801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.805052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.805085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.805331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.805360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.805650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.805990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.806020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.806330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.806360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.806729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.806758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.807185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.807216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.807596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.807626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.807990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.808021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.808297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.808326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.808709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.808738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.809142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.809172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.809533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.809562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.809817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.809846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.810091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.810121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.810519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.810547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.810788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.810818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.811209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.811239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.811513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.811544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.811919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.811951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.812129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.812158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.812535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.812563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.812806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.812835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.813001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.813030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.813392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.813420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.281 [2024-11-20 08:31:19.813796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.813825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 
00:34:15.281 [2024-11-20 08:31:19.814234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.281 [2024-11-20 08:31:19.814264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.281 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.814686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.814715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.815130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.815160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.815581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.815610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.815997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.816027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 
00:34:15.282 [2024-11-20 08:31:19.816367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.816396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.816765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.816794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.817161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.817192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.817545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.817575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.817833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.817875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 
00:34:15.282 [2024-11-20 08:31:19.818102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.818135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.818483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.818512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.818765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.818794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.819136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.819168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 00:34:15.282 [2024-11-20 08:31:19.819518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.282 [2024-11-20 08:31:19.819547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420 00:34:15.282 qpair failed and we were unable to recover it. 
00:34:15.282 [2024-11-20 08:31:19.819911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.819943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.820302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.820330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.820717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.820747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.821069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.821107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbeb8000b90 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.821234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2c020 is same with the state(6) to be set
00:34:15.282 [2024-11-20 08:31:19.821918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.821975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.822353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.822367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.822584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.822594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.823014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.823266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.823278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.823486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.823498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.823694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.823706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.823964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.823976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.824166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.824531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.824541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.824832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.824844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.825149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.825161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.825435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.825446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.825763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.825774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.826104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.826116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.826448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.826459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.826760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.826772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.826984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.826996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.827288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.827299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.282 qpair failed and we were unable to recover it.
00:34:15.282 [2024-11-20 08:31:19.827508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.282 [2024-11-20 08:31:19.827519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.827823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.827834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.828135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.828147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.828446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.828457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.828672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.828683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.829098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.829110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.829479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.829490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.829788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.829801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.830064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.830076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.830412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.830424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.830754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.830764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.831094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.831117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.831443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.831454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.831798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.831810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.832190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.832202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.832556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.832568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.832798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.832809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.833099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.833111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.833439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.833450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.833795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.833807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.834125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.834137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.834467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.834479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.834676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.834690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.834996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.835008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.283 [2024-11-20 08:31:19.835345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.283 [2024-11-20 08:31:19.835356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.283 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.835571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.835583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.835766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.835778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.836109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.836120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.836353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.836363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.836756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.836767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.837151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.837163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.837517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.837528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.837876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.837889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.838205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.838216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.838552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.838572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.838889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.838900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.839223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.839235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.839583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.839595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.840005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.840016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.840221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.840526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.840538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.840829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.840839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.841066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.841077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.841270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.841282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.841632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.841643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.841986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.841999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.842326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.842337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.842678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.842688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.842986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.842997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.843353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.843364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.843576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.843586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.843874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.843886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.844229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.844244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.844463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.844474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.844661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.844673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.845010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.845022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.845270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.845281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.845621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.845631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.845981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.845992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.846284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.846295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.846658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.846670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.847002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.284 [2024-11-20 08:31:19.847014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.284 qpair failed and we were unable to recover it.
00:34:15.284 [2024-11-20 08:31:19.847306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.847317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.847611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.847621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.847915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.847926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.848213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.848224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.848497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.848507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.848702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.848714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.849031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.849044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.849150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.849160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.849441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.849452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.849789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.849799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.850099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.850111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.850429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.850439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.850760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.850774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.851091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.851103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.851419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.851431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.851739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.851750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.852037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.852048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.852348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.852361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.852656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.852667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.852993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.853004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.853323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.853335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.853652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.853663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.853995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.854007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.855106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.855141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.855346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.855360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.855561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.855573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.855929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.855940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.856262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.856274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.856637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.856647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.856992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.285 [2024-11-20 08:31:19.857002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.285 qpair failed and we were unable to recover it.
00:34:15.285 [2024-11-20 08:31:19.857325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.285 [2024-11-20 08:31:19.857335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.285 qpair failed and we were unable to recover it. 00:34:15.285 [2024-11-20 08:31:19.857660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.285 [2024-11-20 08:31:19.857672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.285 qpair failed and we were unable to recover it. 00:34:15.285 [2024-11-20 08:31:19.857920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.285 [2024-11-20 08:31:19.857931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.285 qpair failed and we were unable to recover it. 00:34:15.285 [2024-11-20 08:31:19.858295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.285 [2024-11-20 08:31:19.858306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.285 qpair failed and we were unable to recover it. 00:34:15.285 [2024-11-20 08:31:19.858657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.285 [2024-11-20 08:31:19.858667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.285 qpair failed and we were unable to recover it. 
00:34:15.285 [2024-11-20 08:31:19.859000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.859011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.859327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.859338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.859657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.859668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.860022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.860034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.860281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.860291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.860610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.860621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.860940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.860951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.861293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.861303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.861637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.861647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.861959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.861970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.862298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.862308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.862641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.862653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.863018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.863030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.863334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.863344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.863662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.863672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.864019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.864029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.864358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.864369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.864684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.864696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.865019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.865031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.865368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.865379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.865682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.865692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.866032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.866043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.866268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.866279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.866470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.866485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.866818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.866829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.867139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.867150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.867479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.867489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.867839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.867850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.868171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.868181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.868579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.868589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 
00:34:15.286 [2024-11-20 08:31:19.868901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.868914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.869113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.286 [2024-11-20 08:31:19.869122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.286 qpair failed and we were unable to recover it. 00:34:15.286 [2024-11-20 08:31:19.869449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.869461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.869779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.869790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.869994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.870004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.870329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.870341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.870558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.870569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.870897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.870908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.871175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.871185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.871559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.871570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.871876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.871894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.872218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.872229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.872633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.872643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.872877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.872888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.873221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.873231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.873551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.873561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.873878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.873891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.874217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.874228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.874590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.874600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.874938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.874950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.875271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.875284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.875610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.875621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.875939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.875950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.876282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.876293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.876650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.876661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.876993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.877005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.877163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.877174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.877530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.877540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.877838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.877850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.878038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.878049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.878385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.878396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.878610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.878622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.878954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.878965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.879266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.879277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.879555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.879565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.879866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.879878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.880193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.880203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.880499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.880509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.880719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.880730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 00:34:15.287 [2024-11-20 08:31:19.881042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.881054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.287 qpair failed and we were unable to recover it. 
00:34:15.287 [2024-11-20 08:31:19.881345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.287 [2024-11-20 08:31:19.881357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.288 qpair failed and we were unable to recover it. 00:34:15.288 [2024-11-20 08:31:19.881560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.288 [2024-11-20 08:31:19.881570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.288 qpair failed and we were unable to recover it. 00:34:15.288 [2024-11-20 08:31:19.881904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.288 [2024-11-20 08:31:19.881914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.288 qpair failed and we were unable to recover it. 00:34:15.288 [2024-11-20 08:31:19.882224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.288 [2024-11-20 08:31:19.882236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.288 qpair failed and we were unable to recover it. 00:34:15.288 [2024-11-20 08:31:19.882569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.288 [2024-11-20 08:31:19.882580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.288 qpair failed and we were unable to recover it. 
00:34:15.291 [2024-11-20 08:31:19.918785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.918795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 00:34:15.291 [2024-11-20 08:31:19.919083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.919093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 00:34:15.291 [2024-11-20 08:31:19.919337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.919347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 00:34:15.291 [2024-11-20 08:31:19.919667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.919677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 00:34:15.291 [2024-11-20 08:31:19.920076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.920088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 
00:34:15.291 [2024-11-20 08:31:19.920313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.920324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.291 qpair failed and we were unable to recover it. 00:34:15.291 [2024-11-20 08:31:19.920624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.291 [2024-11-20 08:31:19.920634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.920865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.920878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.921055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.921067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.921372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.921382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.921770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.921781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.922101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.922112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.922455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.922466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.922648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.922660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.922995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.923006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.923306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.923319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.923647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.923657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.924001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.924013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.924354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.924365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.924673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.924683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.925042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.925053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.925355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.925366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.925709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.925719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.926030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.926040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.926339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.926349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.926621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.926631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.926950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.926962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.927271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.927282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.927633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.927643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.927857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.927872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.928221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.928231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.928548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.928567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.928882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.928895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.929213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.929224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.929429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.929439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.929753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.929763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.930055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.930065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.930345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.930361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.930685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.930696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.931042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.931052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 
00:34:15.292 [2024-11-20 08:31:19.931348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.931359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.931682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.931694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.932037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.932050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.292 [2024-11-20 08:31:19.932367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.292 [2024-11-20 08:31:19.932376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.292 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.932689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.932699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.933030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.933040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.933358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.933370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.933681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.933691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.934012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.934022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.934261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.934271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.934575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.934586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.934954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.935254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.935265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.935554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.935563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.935819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.935829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.936157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.936169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.936498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.936510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.936802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.936813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.937133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.937143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.937451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.937461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.937846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.937857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.938046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.938057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.938387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.938397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.938693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.938704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.939023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.939034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.939340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.939359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.939695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.939704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.940100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.940111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.940420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.940430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.940626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.940638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.940972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.940984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.941151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.941163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.941499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.941509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.941831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.941841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 00:34:15.293 [2024-11-20 08:31:19.942155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.942165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [2024-11-20 08:31:19.942464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.293 [2024-11-20 08:31:19.942476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.293 qpair failed and we were unable to recover it. 
00:34:15.293 [last three messages repeated for tqpair=0xa2f490 (addr=10.0.0.2, port=4420) from 08:31:19.942787 through 08:31:19.979111]
00:34:15.296 [2024-11-20 08:31:19.979425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.296 [2024-11-20 08:31:19.979438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.296 qpair failed and we were unable to recover it. 00:34:15.296 [2024-11-20 08:31:19.979752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.979764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.980075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.980086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.980291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.980300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.980611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.980622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 
00:34:15.297 [2024-11-20 08:31:19.980933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.980943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.981075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.981084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.981343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.981353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.981591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.981601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.981938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.981949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 
00:34:15.297 [2024-11-20 08:31:19.982137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.982147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.982481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.982491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.982713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.982723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.983023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.983034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.983347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.983357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 
00:34:15.297 [2024-11-20 08:31:19.983569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.983579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.297 [2024-11-20 08:31:19.983931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.297 [2024-11-20 08:31:19.983941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.297 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.984264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.984277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.984592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.984605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.984890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.984901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 
00:34:15.573 [2024-11-20 08:31:19.985251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.985261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.985587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.985598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.985926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.985936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.986269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.986279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.986612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.986623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 
00:34:15.573 [2024-11-20 08:31:19.986917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.986928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.987223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.987235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.987447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.987457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.987771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.987780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.988105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.988118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 
00:34:15.573 [2024-11-20 08:31:19.988460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.988470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.988655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.988665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.573 [2024-11-20 08:31:19.989028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.573 [2024-11-20 08:31:19.989039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.573 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.989348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.989359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.989574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.989585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.989924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.989934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.990228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.990238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.990568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.990578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.990894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.990905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.991218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.991228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.991457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.991467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.991842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.991852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.992159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.992170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.992544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.992554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.992888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.992899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.993220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.993230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.993559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.993569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.993847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.993857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.994175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.994185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.994383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.994393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.994734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.994744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.995095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.995107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.995457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.995467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.995636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.995646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.995990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.996000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.996297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.996308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.996577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.996587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.996901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.996912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.997229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.997238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.997559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.997569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.997938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.997949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.998219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.998229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.998558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.998568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.998924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.998935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.999150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.999160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:19.999475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.999485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.999705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.999715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:19.999914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:19.999924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:20.000320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:20.000330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 00:34:15.574 [2024-11-20 08:31:20.000651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.574 [2024-11-20 08:31:20.000662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.574 qpair failed and we were unable to recover it. 
00:34:15.574 [2024-11-20 08:31:20.001001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.001012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.001311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.001322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.001655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.001665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.001975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.002498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.003002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.003015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 
00:34:15.575 [2024-11-20 08:31:20.003371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.003382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.003735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.003745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.004002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.004013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.004297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.004308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.004545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.004556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 
00:34:15.575 [2024-11-20 08:31:20.004893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.004904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.005268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.005279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.005669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.005680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.005956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.005967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 00:34:15.575 [2024-11-20 08:31:20.009179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.575 [2024-11-20 08:31:20.009204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.575 qpair failed and we were unable to recover it. 
00:34:15.575 [2024-11-20 08:31:20.009413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.009436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.009769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.009781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.010102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.010114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.010427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.010438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.010715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.010725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.011040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.011052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.011255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.011268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.011581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.011591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.011885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.011896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.012009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.012021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.012287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.012297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.012596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.012607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.012939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.012954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.013236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.013247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.013398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.013408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.013766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.013777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.014088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.014099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.014424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.014435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.014719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.014730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.014947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.014959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.015291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.015302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.575 [2024-11-20 08:31:20.015590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.575 [2024-11-20 08:31:20.015602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.575 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.015999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.016010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.016202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.016213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.016547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.016558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.016971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.016982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.017125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.017135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.017352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.017363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.017715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.017726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.017995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.018006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.018196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.018206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.018448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.018457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.018657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.018667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.019000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.019011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.019205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.019215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.019412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.019422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.019792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.019801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.019996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.020006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.020327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.020338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.020677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.020687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.020992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.021003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.021313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.021323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.021682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.021691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.022024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.022037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.022376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.022386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.022583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.022593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.022918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.022930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.023257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.023268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.023557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.023567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.023913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.023924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.024252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.024263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.024624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.024634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.024932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.024943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.025269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.025280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.025596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.025607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.025906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.025917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.026235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.576 [2024-11-20 08:31:20.026246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.576 qpair failed and we were unable to recover it.
00:34:15.576 [2024-11-20 08:31:20.026530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.026540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.026840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.026851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.027175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.027186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.027492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.027503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.027804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.027814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.028007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.028024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.028315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.028325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.028624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.028635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.028939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.028951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.029265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.029276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.029558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.029569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.029783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.029793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.030091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.030102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.030391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.030401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.030725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.030736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.031050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.031062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.031376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.031386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.031671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.031682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.031959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.031970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.032299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.032309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.032591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.032603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.032936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.032946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.033238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.033250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.033596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.033610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.033916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.033927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.034150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.034161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.034476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.034487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.034707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.034718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.035050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.035061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.035387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.035397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.035779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.035790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.036049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.036060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.036384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.036395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.036734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.036745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.036995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.037006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.037349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.037360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.037659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.037669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.577 [2024-11-20 08:31:20.037982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:15.577 qpair failed and we were unable to recover it.
00:34:15.577 [2024-11-20 08:31:20.038195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.577 [2024-11-20 08:31:20.038206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.578 qpair failed and we were unable to recover it. 00:34:15.578 [2024-11-20 08:31:20.038504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.578 [2024-11-20 08:31:20.038514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.578 qpair failed and we were unable to recover it. 00:34:15.578 [2024-11-20 08:31:20.038848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.578 [2024-11-20 08:31:20.038858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.578 qpair failed and we were unable to recover it. 00:34:15.578 [2024-11-20 08:31:20.039108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.578 [2024-11-20 08:31:20.039119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.578 qpair failed and we were unable to recover it. 00:34:15.578 [2024-11-20 08:31:20.039243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.578 [2024-11-20 08:31:20.039254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:15.578 qpair failed and we were unable to recover it. 
00:34:15.578 Read completed with error (sct=0, sc=8)
00:34:15.578 starting I/O failed
[... further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed", omitted ...]
00:34:15.578 [2024-11-20 08:31:20.039483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:34:15.578 [2024-11-20 08:31:20.039788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.578 [2024-11-20 08:31:20.039806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.578 qpair failed and we were unable to recover it.
00:34:15.578 [2024-11-20 08:31:20.040562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.578 [2024-11-20 08:31:20.040572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.578 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 repeated from 08:31:20.040 through 08:31:20.065 ...]
00:34:15.581 [2024-11-20 08:31:20.065233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.065242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.065439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.065447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.065767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.065775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.066089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.066098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.066422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.066431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.066797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.066805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.067016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.067025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.067209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.067217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.067578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.067587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.067653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.067661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.067967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.067975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.068186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.068193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.068657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.068664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.068931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.068938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.069153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.069160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.069458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.069467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.069718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.069725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.070031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.070039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.070276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.070283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.070500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.070508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.070579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.070585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.070904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.070913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.071150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.071157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.071365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.071372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.071689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.071697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.072048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.072056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.072299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.072306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.072538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.072545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.072860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.072870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.073205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.073212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 
00:34:15.581 [2024-11-20 08:31:20.073613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.581 [2024-11-20 08:31:20.073621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.581 qpair failed and we were unable to recover it. 00:34:15.581 [2024-11-20 08:31:20.073729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.073736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.074016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.074023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.074344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.074352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.074672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.074679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.074892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.074900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.075128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.075136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.075350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.075357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.075648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.075655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.075828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.075836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.076170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.076178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.076377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.076386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.076707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.076715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.077055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.077063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.077391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.077399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.077717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.077724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.078025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.078033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.078228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.078235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.078583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.078591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.078923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.078931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.079008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.079014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.079224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.079231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.079538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.079545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.079889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.079897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.080226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.080233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.080424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.080433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.080775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.080782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.080844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.080851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.081055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.081063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.081342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.081350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.081577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.081584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.081746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.081753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.082113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.082121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.082305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.082313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.082662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.082670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 
00:34:15.582 [2024-11-20 08:31:20.082894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.082903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.083214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.083222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.083411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.083419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.582 [2024-11-20 08:31:20.083724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.582 [2024-11-20 08:31:20.083731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.582 qpair failed and we were unable to recover it. 00:34:15.583 [2024-11-20 08:31:20.084065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.084073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 
00:34:15.583 [2024-11-20 08:31:20.084411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.084418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 00:34:15.583 [2024-11-20 08:31:20.084583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.084591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 00:34:15.583 [2024-11-20 08:31:20.084936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.084943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 00:34:15.583 [2024-11-20 08:31:20.085269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.085276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 00:34:15.583 [2024-11-20 08:31:20.085628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.583 [2024-11-20 08:31:20.085635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.583 qpair failed and we were unable to recover it. 
00:34:15.583 [2024-11-20 08:31:20.085838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.583 [2024-11-20 08:31:20.085845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.583 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats ~115 times between 08:31:20.085838 and 08:31:20.117404 (errno = 111 = ECONNREFUSED, tqpair=0x7fbebc000b90, addr=10.0.0.2, port=4420); intermediate repeats elided ...]
00:34:15.586 [2024-11-20 08:31:20.117404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.586 [2024-11-20 08:31:20.117412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.586 qpair failed and we were unable to recover it.
00:34:15.586 [2024-11-20 08:31:20.117621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.117627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.117905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.117913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.118147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.118154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.118291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.118298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.118622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.118630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 
00:34:15.586 [2024-11-20 08:31:20.118957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.118964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.119264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.119272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.119629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.119636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.119845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.119852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.120156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.120165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 
00:34:15.586 [2024-11-20 08:31:20.120488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.120496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.120960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.120967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.121305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.121313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.121626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.121633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.121948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.121956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 
00:34:15.586 [2024-11-20 08:31:20.122246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.122253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.122547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.122554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.122768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.122776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.123086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.123093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.123283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.123290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 
00:34:15.586 [2024-11-20 08:31:20.123657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.123664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.123829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.123836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.124066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.124074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.124389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.124397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.586 qpair failed and we were unable to recover it. 00:34:15.586 [2024-11-20 08:31:20.124708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.586 [2024-11-20 08:31:20.124716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.125062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.125072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.125409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.125416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.125714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.125722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.126032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.126040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.126335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.126342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.126738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.126745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.127060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.127067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.127277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.127284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.127582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.127591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.127771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.127779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.128057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.128065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.128331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.128338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.128523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.128530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.128756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.128763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.128971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.128980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.129309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.129316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.129457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.129464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.129696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.129704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.129890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.130201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.130208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.130400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.130407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.130702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.130709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.130897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.130904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.131198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.131206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.131491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.131500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 
00:34:15.587 [2024-11-20 08:31:20.131679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.131688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.131995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.132003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.132272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.587 [2024-11-20 08:31:20.132280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.587 qpair failed and we were unable to recover it. 00:34:15.587 [2024-11-20 08:31:20.132447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.132455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.132763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.132770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.588 [2024-11-20 08:31:20.132916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.132924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.133317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.133324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.133673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.133680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.133857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.133866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.134043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.134049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.588 [2024-11-20 08:31:20.134255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.134262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.134588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.134595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.134884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.134892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.134940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.134947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.135291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.135299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.588 [2024-11-20 08:31:20.135605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.135615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.135906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.135914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.136195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.136202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.136415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.136423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.136729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.136736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.588 [2024-11-20 08:31:20.137115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.137122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.137413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.137420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.137759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.137766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.137933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.137940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 00:34:15.588 [2024-11-20 08:31:20.138308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.138315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.588 [2024-11-20 08:31:20.138604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.588 [2024-11-20 08:31:20.138612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.588 qpair failed and we were unable to recover it. 
00:34:15.591 [... the same connect() failure sequence (posix.c:1054, errno = 111 / ECONNREFUSED; nvme_tcp.c:2288, tqpair=0x7fbebc000b90, addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every retry through 2024-11-20 08:31:20.171510 ...]
00:34:15.591 [2024-11-20 08:31:20.171703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.591 [2024-11-20 08:31:20.171710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.591 qpair failed and we were unable to recover it. 00:34:15.591 [2024-11-20 08:31:20.171995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.591 [2024-11-20 08:31:20.172002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.591 qpair failed and we were unable to recover it. 00:34:15.591 [2024-11-20 08:31:20.172324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.591 [2024-11-20 08:31:20.172331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.172638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.172645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.172948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.172956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.173274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.173281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.173564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.173571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.173936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.173944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.174150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.174157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.174456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.174797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.174803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.175082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.175089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.175375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.175382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.175561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.175569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.175902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.175910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.176260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.176267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.176571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.176578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.176844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.176852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.177029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.177038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.177346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.177353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.177555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.177562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.177933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.177940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.178240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.178250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.178555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.178562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.178895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.178902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.179241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.179248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.179534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.179541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.179853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.179861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.180198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.180204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.180544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.180552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.592 [2024-11-20 08:31:20.180845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.180853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.181019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.181026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.181323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.181330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.181627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.181635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 00:34:15.592 [2024-11-20 08:31:20.181807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.592 [2024-11-20 08:31:20.181814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.592 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.182099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.182107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.182394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.182402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.182687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.182695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.183007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.183015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.183299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.183306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.183599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.183606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.183913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.183920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.184239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.184246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.184553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.184560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.184841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.184848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.185213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.185220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.185461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.185468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.185749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.185756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.186076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.186083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.186400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.186407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.186570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.186578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.186909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.186916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.187235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.187242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.187568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.187575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.187739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.187746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.188125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.188132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.188446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.188453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.188754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.188760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.189062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.189070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.189381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.189388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.189693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.189702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.189991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.189998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.190288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.190298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.190484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.190493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.190659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.190667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.191001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.191008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.191316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.191323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.191539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.191546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.191746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.191753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.192084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.192091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 
00:34:15.593 [2024-11-20 08:31:20.192398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.192405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.192718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.192725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.593 qpair failed and we were unable to recover it. 00:34:15.593 [2024-11-20 08:31:20.193012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.593 [2024-11-20 08:31:20.193019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.594 qpair failed and we were unable to recover it. 00:34:15.594 [2024-11-20 08:31:20.193306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.594 [2024-11-20 08:31:20.193313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.594 qpair failed and we were unable to recover it. 00:34:15.594 [2024-11-20 08:31:20.193477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.594 [2024-11-20 08:31:20.193485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.594 qpair failed and we were unable to recover it. 
00:34:15.594 [2024-11-20 08:31:20.193819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.594 [2024-11-20 08:31:20.193826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.594 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." message pair repeats continuously for tqpair=0x7fbebc000b90 (addr=10.0.0.2, port=4420) from 08:31:20.194173 through 08:31:20.227083 ...]
00:34:15.597 [2024-11-20 08:31:20.227367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.227374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.227691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.227698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.228003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.228011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.228325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.228332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.228538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.228545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.228895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.228903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.229210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.229217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.229415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.229423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.229725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.229732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.230042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.230353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.230360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.230565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.230572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.230907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.230915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.231294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.231301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.231520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.231527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.231787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.231795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.232102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.232110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.232404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.232411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.232702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.232709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.233028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.233036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.233342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.233349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.233664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.233671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.234005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.234012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.234366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.234372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.234698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.234705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.234888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.234896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.235223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.235230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 
00:34:15.597 [2024-11-20 08:31:20.235534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.235541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.235851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.235858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.236214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.236221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.236524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.597 [2024-11-20 08:31:20.236531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.597 qpair failed and we were unable to recover it. 00:34:15.597 [2024-11-20 08:31:20.236851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.236857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.237193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.237200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.237531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.237538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.237846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.237854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.238097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.238104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.238483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.238490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.238777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.238785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.238968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.238975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.239166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.239172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.239464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.239471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.239660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.239668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.239965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.239973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.240284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.240292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.240597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.240604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.240943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.240951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.241262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.241269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.241559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.241567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.241755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.241762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.242034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.242042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.242396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.242402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.242693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.242706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.243011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.243018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.243332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.243339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.243531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.243538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.243860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.243872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.244274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.244281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.244622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.244629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.244832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.244839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.245160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.245168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.245471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.245478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.245794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.245801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.246127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.246134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.246420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.246427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.246738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.246745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.246943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.246951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.247217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.247225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 
00:34:15.598 [2024-11-20 08:31:20.247476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.247482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.598 [2024-11-20 08:31:20.247798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.598 [2024-11-20 08:31:20.247805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.598 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.248119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.248126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.248279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.248287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.248547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.248561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 
00:34:15.599 [2024-11-20 08:31:20.248876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.248883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.249230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.249238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.249586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.249595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.249910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.249917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 00:34:15.599 [2024-11-20 08:31:20.250099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.250106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it. 
00:34:15.599 [2024-11-20 08:31:20.250480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.599 [2024-11-20 08:31:20.250486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.599 qpair failed and we were unable to recover it.
00:34:15.602 [identical error pair repeated from 08:31:20.250774 through 08:31:20.283679 — connect() failed with errno = 111 (ECONNREFUSED) followed by "sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it."; repeated entries elided]
00:34:15.602 [2024-11-20 08:31:20.283892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.602 [2024-11-20 08:31:20.283900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.602 qpair failed and we were unable to recover it. 00:34:15.602 [2024-11-20 08:31:20.284081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.602 [2024-11-20 08:31:20.284088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.602 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.284266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.284277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.284364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.284372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.284732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.284739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.284907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.284914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.285216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.285224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.285498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.285505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.285728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.285735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.286035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.286042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.286339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.286355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.286548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.286555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.286845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.286852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.287235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.287242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.287577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.287585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.287886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.287895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.288293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.288300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.288588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.288596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.288877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.288884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.289206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.289213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.289540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.289547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.289871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.289879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.290253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.290260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.290561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.290568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.290884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.290891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.291253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.291259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.291539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.291546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.291865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.291874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.292223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.292230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.292433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.292440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 
00:34:15.891 [2024-11-20 08:31:20.292720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.292727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.293040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.891 [2024-11-20 08:31:20.293049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.891 qpair failed and we were unable to recover it. 00:34:15.891 [2024-11-20 08:31:20.293366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.293374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.293708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.293716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.294053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.294060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.294374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.294382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.294605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.294612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.294882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.294890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.295210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.295217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.295302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.295308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.295603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.295611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.295899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.295907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.296227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.296237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.296547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.296553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.296871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.296879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.297206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.297213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.297407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.297413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.297716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.297723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.298044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.298051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.298241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.298248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.298608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.298909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.298916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.299225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.299231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.299530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.299538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.299864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.299871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.300058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.300408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.300415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.300761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.300768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.300952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.300959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.301163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.301170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.301504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.301512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.301810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.301818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.302029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.302037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.302373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.302380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.302685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.302692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 
00:34:15.892 [2024-11-20 08:31:20.302892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.302900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.303181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.303188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.303341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.303349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.892 [2024-11-20 08:31:20.303733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.892 [2024-11-20 08:31:20.303740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.892 qpair failed and we were unable to recover it. 00:34:15.893 [2024-11-20 08:31:20.304030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.304038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 
00:34:15.893 [2024-11-20 08:31:20.304218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.304226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 00:34:15.893 [2024-11-20 08:31:20.304526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.304533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 00:34:15.893 [2024-11-20 08:31:20.304864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.304872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 00:34:15.893 [2024-11-20 08:31:20.305227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.305234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 00:34:15.893 [2024-11-20 08:31:20.305518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.893 [2024-11-20 08:31:20.305525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.893 qpair failed and we were unable to recover it. 
00:34:15.893 [2024-11-20 08:31:20.305872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.893 [2024-11-20 08:31:20.305880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.893 qpair failed and we were unable to recover it.
00:34:15.896 [... same three messages repeated (connect() failed, errno = 111; sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) from 2024-11-20 08:31:20.305920 through 2024-11-20 08:31:20.337948 ...]
00:34:15.896 [2024-11-20 08:31:20.338213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.338220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.338533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.338540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.338846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.338853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.339153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.339161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.339347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.339354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 
00:34:15.896 [2024-11-20 08:31:20.339566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.339574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.339740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.339747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.340062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.340070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.340417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.340425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.340693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.340701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 
00:34:15.896 [2024-11-20 08:31:20.340913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.340920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.341231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.341238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.341545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.341553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.341893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.341901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.342285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.342292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 
00:34:15.896 [2024-11-20 08:31:20.342606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.342613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.342881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.342889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.343166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.343173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.343431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.343438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.343761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.343769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 
00:34:15.896 [2024-11-20 08:31:20.344101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.344109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.344411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.344419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.344731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.896 [2024-11-20 08:31:20.344739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.896 qpair failed and we were unable to recover it. 00:34:15.896 [2024-11-20 08:31:20.344912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.344920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.345215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.345223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.345564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.345572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.345893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.345901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.346212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.346219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.346527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.346534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.346855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.346865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.347037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.347044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.347415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.347422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.347614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.347621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.347965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.347972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.348291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.348298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.348487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.348494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.348790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.348797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.348975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.348983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.349251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.349258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.349565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.349574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.349882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.349889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.350201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.350208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.350546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.350554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.350728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.350736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.351030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.351037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.351253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.351260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.351576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.351584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.351774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.351782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.352074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.352082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.352397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.352404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.352584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.352592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.352897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.352906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.353179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.353186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.353406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.353413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.353732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.353738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.353927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.353935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.354283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.354290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.354453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.354461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.354735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.354743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.897 [2024-11-20 08:31:20.355062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.355070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 
00:34:15.897 [2024-11-20 08:31:20.355427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.897 [2024-11-20 08:31:20.355434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.897 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.355742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.355749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.356084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.356091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.356380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.356388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.356573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.356580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 
00:34:15.898 [2024-11-20 08:31:20.356902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.356909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.357317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.357324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.357516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.357523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.357833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.357841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.358154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.358161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 
00:34:15.898 [2024-11-20 08:31:20.358455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.358463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.358697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.358704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.359027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.359035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.359389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.359396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 00:34:15.898 [2024-11-20 08:31:20.359672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.898 [2024-11-20 08:31:20.359679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.898 qpair failed and we were unable to recover it. 
00:34:15.898 [2024-11-20 08:31:20.359987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.898 [2024-11-20 08:31:20.359995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.898 qpair failed and we were unable to recover it.
00:34:15.901 (the same three-record error sequence — posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock failure for tqpair=0x7fbebc000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it." — repeats verbatim for every reconnect attempt from 08:31:20.360215 through 08:31:20.391792; duplicate records elided)
00:34:15.901 [2024-11-20 08:31:20.392093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.392101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.392395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.392402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.392559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.392566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.392787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.392794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.393084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.393091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 
00:34:15.901 [2024-11-20 08:31:20.393433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.393441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.393753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.393760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.394060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.394073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.394380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.394387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.394760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.394768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 
00:34:15.901 [2024-11-20 08:31:20.395086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.395094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.395449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.395455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.395747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.395755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.396035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.396042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.396372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.396380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 
00:34:15.901 [2024-11-20 08:31:20.396701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.901 [2024-11-20 08:31:20.396707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.901 qpair failed and we were unable to recover it. 00:34:15.901 [2024-11-20 08:31:20.397063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.397070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.397379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.397386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.397669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.397676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.397978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.397985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.398290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.398297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.398463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.398470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.398772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.398779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.399161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.399168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.399480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.399487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.399775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.399782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.400061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.400068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.400396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.400404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.400686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.400694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.401009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.401016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.401214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.401222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 
00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Read completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 Write completed with error (sct=0, sc=8) 00:34:15.902 starting I/O failed 00:34:15.902 [2024-11-20 08:31:20.401973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.902 [2024-11-20 08:31:20.402310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.402371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbec4000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.402612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.402643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbec4000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.403020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.403027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.403349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.403356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.403697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.403704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.404030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.404037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.404360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.404367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.404653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.404660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.404958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.404965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.405271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.405278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.405556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.405564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 
00:34:15.902 [2024-11-20 08:31:20.405832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.902 [2024-11-20 08:31:20.405841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.902 qpair failed and we were unable to recover it. 00:34:15.902 [2024-11-20 08:31:20.406223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.406230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.406561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.406569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.406842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.406849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.407166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.407174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.903 [2024-11-20 08:31:20.407471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.407478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.407754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.407762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.408045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.408053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.408346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.408353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.408699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.408707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.903 [2024-11-20 08:31:20.408983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.408991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.409276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.409283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.409571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.409578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.409854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.409870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.410059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.410066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.903 [2024-11-20 08:31:20.410318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.410324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.410527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.410540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.410893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.410900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.411081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.411089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.411443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.411450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.903 [2024-11-20 08:31:20.411748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.411755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.412085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.412092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.412391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.412407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.412687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.412694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.412890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.412897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.903 [2024-11-20 08:31:20.413215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.413221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.413528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.413535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.413820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.413827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.414032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.414039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 00:34:15.903 [2024-11-20 08:31:20.414449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.903 [2024-11-20 08:31:20.414456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.903 qpair failed and we were unable to recover it. 
00:34:15.906 (last message pair repeated ~110 more times between 08:31:20.414741 and 08:31:20.446915: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:34:15.906 [2024-11-20 08:31:20.447216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.447223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.447520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.447527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.447857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.447869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.448166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.448173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.448473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.448480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 
00:34:15.906 [2024-11-20 08:31:20.448781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.448788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.449087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.449094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.449263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.906 [2024-11-20 08:31:20.449272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.906 qpair failed and we were unable to recover it. 00:34:15.906 [2024-11-20 08:31:20.449584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.449591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.449883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.449891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.450185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.450192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.450488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.450495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.450809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.450815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.451116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.451123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.451440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.451447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.451684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.451691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.452023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.452030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.452339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.452346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.452538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.452544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.452713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.452720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.452987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.452994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.453358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.453365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.453684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.453691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.454003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.454009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.454320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.454327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.454645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.454651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.454937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.454944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.455116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.455124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.455422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.455430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.455698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.455706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.455909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.455916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.456200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.456206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.456513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.456521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.456843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.456850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.457044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.457052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.457327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.457335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.457635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.457643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.457795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.457803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.457990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.457998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.458182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.458190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.458412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.458602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.458610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.458886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.458894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.459223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.907 [2024-11-20 08:31:20.459557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.459563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 
00:34:15.907 [2024-11-20 08:31:20.459873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.907 [2024-11-20 08:31:20.459881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.907 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.460170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.460177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.460487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.460495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.460802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.460809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.461011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.461018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.461343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.461349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.461590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.461597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.461942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.461949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.462257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.462264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.462483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.462489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.462794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.462801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.462905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.462912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.463201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.463208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.463522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.463529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.463871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.463880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.464185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.464192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.464503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.464510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.464681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.464687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.465014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.465021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.465341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.465347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.465641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.465648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.465932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.465949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.466318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.466324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.466615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.466622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.466929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.466936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.467275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.467282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.467582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.467589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.467762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.467769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.468044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.468052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.468347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.468355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.908 [2024-11-20 08:31:20.468564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.468571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.468905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.468913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.469230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.469237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.469540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.469547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 00:34:15.908 [2024-11-20 08:31:20.469857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.908 [2024-11-20 08:31:20.469866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.908 qpair failed and we were unable to recover it. 
00:34:15.911 [2024-11-20 08:31:20.502395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.911 [2024-11-20 08:31:20.502402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.911 qpair failed and we were unable to recover it. 00:34:15.911 [2024-11-20 08:31:20.502682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.911 [2024-11-20 08:31:20.502690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.911 qpair failed and we were unable to recover it. 00:34:15.911 [2024-11-20 08:31:20.503002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.503010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.503188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.503196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.503509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.503517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.503840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.503847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.504187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.504195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.504408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.504416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.504588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.504596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.504867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.504875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.505182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.505189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.505475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.505482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.505840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.505846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.506150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.506158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.506467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.506474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.506761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.506768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.507052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.507059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.507350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.507359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.507668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.507675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.507773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.507779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.508006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.508013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.508334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.508340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.508662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.508669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.508940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.508948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.509221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.509227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.509615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.509622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.509907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.509921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.510222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.510228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.510543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.510550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.510858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.510875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.511179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.511186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.511497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.511504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.511815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.511823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.512139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.512146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.512348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.512355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.912 [2024-11-20 08:31:20.512680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.512687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.512979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.512987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.513200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.513207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.513531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.513537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 00:34:15.912 [2024-11-20 08:31:20.513828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.912 [2024-11-20 08:31:20.513834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.912 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.514024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.514031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.514353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.514360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.514702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.514708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.514918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.514924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.515200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.515206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.515520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.515527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.515841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.515848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.516159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.516166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.516387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.516393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.516710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.516717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.517015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.517023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.517322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.517329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.517652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.517659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.517944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.517951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.518261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.518268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.518581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.518588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.518873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.518880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.519197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.519207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.519467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.519475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.519645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.519652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.519932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.519940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.520276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.520283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.520586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.520594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.520929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.520937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.521197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.521205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.521423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.521430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.521594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.521602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.521909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.521917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.522239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.522247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.522447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.522455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.522763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.522771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.522957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.522966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.523304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.523311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.523577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.523585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.523889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.523897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 
00:34:15.913 [2024-11-20 08:31:20.524253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.524261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.524564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.524572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.913 qpair failed and we were unable to recover it. 00:34:15.913 [2024-11-20 08:31:20.524895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.913 [2024-11-20 08:31:20.524904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.914 qpair failed and we were unable to recover it. 00:34:15.914 [2024-11-20 08:31:20.525202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.914 [2024-11-20 08:31:20.525209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.914 qpair failed and we were unable to recover it. 00:34:15.914 [2024-11-20 08:31:20.525498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.914 [2024-11-20 08:31:20.525506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.914 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.555956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.555964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.556311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.556318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.556600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.556607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.556923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.556930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.557224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.557231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.557545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.557552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.557870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.557877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.558197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.558204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.558488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.558496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.558709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.558717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.558978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.558985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.559178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.559185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.559232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.559239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.559587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.559595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.559774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.559781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.560104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.560112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.560313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.560321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.560511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.560518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.560804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.560811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.561162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.561170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.561345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.561352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.561542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.561549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.561717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.561723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.562031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.562039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.562347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.562354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 
00:34:15.917 [2024-11-20 08:31:20.562660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.562667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.917 qpair failed and we were unable to recover it. 00:34:15.917 [2024-11-20 08:31:20.562874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.917 [2024-11-20 08:31:20.562881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.563259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.563267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.563580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.563588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.563792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.563799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.563988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.563996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.564345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.564352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.564677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.564684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.565002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.565009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.565313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.565321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.565525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.565533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.565829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.565836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.566140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.566147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.566361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.566368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.566646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.566653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.566976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.566984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.567282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.567290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.567577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.567584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.567761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.567769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.568053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.568061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.568303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.568310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.568611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.568619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.568936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.568944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.569284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.569298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.569587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.569594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.569763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.569770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.570018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.570026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.570330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.570337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.570623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.570631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.570935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.570943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.571242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.571250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.571534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.571541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.571923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.571930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.572287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.572294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.572469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.572477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 
00:34:15.918 [2024-11-20 08:31:20.572770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.572777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.572984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.918 [2024-11-20 08:31:20.572991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.918 qpair failed and we were unable to recover it. 00:34:15.918 [2024-11-20 08:31:20.573212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.573220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.573357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.573365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.573676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.573683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 
00:34:15.919 [2024-11-20 08:31:20.573856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.573869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.574164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.574171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.574443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.574450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.574752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.574758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.574938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.574946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 
00:34:15.919 [2024-11-20 08:31:20.575295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.575301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.575516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.575524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.575863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.575872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.576172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.576179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 00:34:15.919 [2024-11-20 08:31:20.576378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.919 [2024-11-20 08:31:20.576385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:15.919 qpair failed and we were unable to recover it. 
00:34:15.919 [2024-11-20 08:31:20.576697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.919 [2024-11-20 08:31:20.576705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:15.919 qpair failed and we were unable to recover it.
00:34:16.198 [2024-11-20 08:31:20.609305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.609312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.609672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.609679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.609722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.609729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.610064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.610071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.610286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.610293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-11-20 08:31:20.610573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.610580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.610920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.611207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.611214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.611551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.611920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-11-20 08:31:20.612255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.612262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.612587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.612595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.612796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.612804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.613090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.613098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.613177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.613184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-11-20 08:31:20.613534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.613541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.613867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.613875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.614169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.614177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.614345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.614352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.614642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.614649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-11-20 08:31:20.614971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.614979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.615204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.615212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.615507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.615514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.198 qpair failed and we were unable to recover it. 00:34:16.198 [2024-11-20 08:31:20.615881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.198 [2024-11-20 08:31:20.615888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.616063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.616072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.616400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.616712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.616720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.617123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.617131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.617338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.617345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.617651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.617658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.617999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.618007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.618325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.618334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.618672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.618679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.618967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.618974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.619283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.619290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.619606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.619613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.619948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.619955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.620262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.620270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.620444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.620451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.620737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.620744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.620940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.620947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.621228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.621234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.621561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.621569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.621874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.621881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.622238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.622245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.622519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.622526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.622823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.622829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.623123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.623130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.623421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.623428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.623739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.623746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.624024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.624031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.624335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.624343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.624650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.624657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.624938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.624945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.625155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.625163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.625392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.625399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.625700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.625708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.625879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.625887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.626260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.626266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.199 [2024-11-20 08:31:20.626460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.626467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 
00:34:16.199 [2024-11-20 08:31:20.626809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.199 [2024-11-20 08:31:20.626816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.199 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.627114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.627122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.627430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.627436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.627520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.627527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.627793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.627802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 
00:34:16.200 [2024-11-20 08:31:20.628109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.628116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.628434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.628441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.628760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.628766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.629056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.629064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.629384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.629391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 
00:34:16.200 [2024-11-20 08:31:20.629555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.629563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.629883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.629890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.630205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.630212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.630517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.630524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 00:34:16.200 [2024-11-20 08:31:20.630848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.630855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it. 
00:34:16.200 [2024-11-20 08:31:20.631025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.200 [2024-11-20 08:31:20.631033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.200 qpair failed and we were unable to recover it.
00:34:16.200 [... same message pair (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeated ~114 more times between 08:31:20.631 and 08:31:20.665 ...]
00:34:16.203 [2024-11-20 08:31:20.665326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.665334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.665650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.665658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.665825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.665832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.666166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.666174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.666492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.666499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 
00:34:16.203 [2024-11-20 08:31:20.666816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.666822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.667013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.667020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.667379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.667386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.667702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.667708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.668015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.668022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 
00:34:16.203 [2024-11-20 08:31:20.668341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.668348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.668640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.668647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.668930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.668937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.669218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.669225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 00:34:16.203 [2024-11-20 08:31:20.669609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.203 [2024-11-20 08:31:20.669615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.203 qpair failed and we were unable to recover it. 
00:34:16.203 [2024-11-20 08:31:20.669986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.669993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.670293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.670300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.670561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.670568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.670897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.670904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.671215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.671222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.671532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.671538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.671729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.671737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.672036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.672043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.672345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.672353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.672642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.672649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.672953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.672961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.673262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.673269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.673557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.673564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.673883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.673890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.674223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.674230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.674551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.674558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.674869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.674876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.675170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.675176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.675483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.675490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.675775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.675783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.676081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.676089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.676298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.676305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.676616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.676624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.676974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.676981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.677375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.677382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.677653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.677660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.677992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.677999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.678305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.678311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.678639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.678646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.678952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.678960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.679266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.679273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.679587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.679594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.679871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.679878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.680235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.680242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.680427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.680434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 
00:34:16.204 [2024-11-20 08:31:20.680764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.680771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.681079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.681086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.204 [2024-11-20 08:31:20.681380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.204 [2024-11-20 08:31:20.681387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.204 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.681712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.681719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.682014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.682021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.682328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.682335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.682622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.682630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.682945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.682953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.683254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.683262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.683568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.683575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.683880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.683887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.684208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.684214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.684528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.684535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.684868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.684875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.685192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.685199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.685523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.685531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.685840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.685848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.686202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.686504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.686511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.686715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.686723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.687022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.687028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.687338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.687345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.687696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.687703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.688004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.688011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.688338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.688346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.688637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.688643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.688929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.688936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.689271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.689278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.689566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.689574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.689872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.689879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.690088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.690094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.690393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.690399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.690731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.690738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.691035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.691042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.691342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.691350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 
00:34:16.205 [2024-11-20 08:31:20.691659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.691665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.205 qpair failed and we were unable to recover it. 00:34:16.205 [2024-11-20 08:31:20.691978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.205 [2024-11-20 08:31:20.691985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.692307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.692314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.692602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.692609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.692923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.693221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.693228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.693546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.693553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.693835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.693844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.694154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.694161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.694437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.694444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.694710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.694717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.695043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.695051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.695359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.695367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.695558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.695566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.695872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.695879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.696162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.696170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.696485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.696492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.696872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.696880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.697172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.697179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.697508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.697515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.697803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.697810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.698016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.698023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.698330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.698336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.698724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.698731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.698901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.698909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.699213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.699220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.699527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.699534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.699752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.699759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.699958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.699965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.700190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.700198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.700414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.700421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.700610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.700617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.700791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.700797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.701104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.701112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.701440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.701447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 
00:34:16.206 [2024-11-20 08:31:20.701758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.701765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.702059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.702066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.702266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.702273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.702595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.206 [2024-11-20 08:31:20.702602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.206 qpair failed and we were unable to recover it. 00:34:16.206 [2024-11-20 08:31:20.702817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.702825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 [2024-11-20 08:31:20.703040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.703048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.703346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.703354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.703665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.703672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.703880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.703888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.704192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.704198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 [2024-11-20 08:31:20.704519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.704525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.704836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.704842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.705038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.705045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.705381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.705387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.705705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.705712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 [2024-11-20 08:31:20.706035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.706042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.706332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.706339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.706647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.706654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.706939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.706947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.707276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.707284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2186932 Killed "${NVMF_APP[@]}" "$@" 00:34:16.207 [2024-11-20 08:31:20.707603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.707611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.707908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.707916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.708135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.708142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:16.207 [2024-11-20 08:31:20.708415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.708423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:16.207 [2024-11-20 08:31:20.708738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.708746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:16.207 [2024-11-20 08:31:20.708967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.708978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.709174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.709181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.207 [2024-11-20 08:31:20.709489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.709496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.207 [2024-11-20 08:31:20.709801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.709809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.710133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.710141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.710347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.710354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.710588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.710596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.710773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.710780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 [2024-11-20 08:31:20.711071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.711079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.711279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.711286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.711469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.711476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.711797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.711804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.712092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.712100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 
00:34:16.207 [2024-11-20 08:31:20.712427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.712434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.207 qpair failed and we were unable to recover it. 00:34:16.207 [2024-11-20 08:31:20.712783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.207 [2024-11-20 08:31:20.712790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.713123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.713130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.713515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.713522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.713763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.713770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 
00:34:16.208 [2024-11-20 08:31:20.714075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.714084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.714379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.714387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.714719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.714727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.715062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.715070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 00:34:16.208 [2024-11-20 08:31:20.715269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.208 [2024-11-20 08:31:20.715276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.208 qpair failed and we were unable to recover it. 
00:34:16.208 [2024-11-20 08:31:20.715589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.208 [2024-11-20 08:31:20.715596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.208 qpair failed and we were unable to recover it.
00:34:16.208 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats with advancing timestamps from 08:31:20.715916 through 08:31:20.747792; the shell trace interleaved with those repeats follows ...]
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2187947
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2187947
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2187947 ']'
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:16.208 08:31:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:16.211 [2024-11-20 08:31:20.748023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.748030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.748347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.748356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.748665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.748672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.748882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.748889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.749065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.749073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 
00:34:16.211 [2024-11-20 08:31:20.749396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.749403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.749474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.749480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.749765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.749772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.750061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.750068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.750378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.750385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 
00:34:16.211 [2024-11-20 08:31:20.750690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.750697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.751009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.751017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.751366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.751374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.751432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.751440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.751716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.751724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 
00:34:16.211 [2024-11-20 08:31:20.752058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.752066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.752379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.752386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.752674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.752689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.752963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.211 [2024-11-20 08:31:20.753264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.753271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 
00:34:16.211 [2024-11-20 08:31:20.753593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.211 [2024-11-20 08:31:20.753600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.211 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.753886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.753893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.754185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.754192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.754481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.754489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.754780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.754787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.755085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.755093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.755297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.755305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.755496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.755504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.755793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.755803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.756099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.756107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.756410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.756418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.756732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.756739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.756923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.756930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.757196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.757203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.757507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.757514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.757805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.757812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.758119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.758126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.758329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.758336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.758678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.758685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.758858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.758873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.759146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.759153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.759557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.759565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.759860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.759873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.760226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.760233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.760548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.760555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.760768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.760774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.761067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.761074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.761362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.761369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.761631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.761638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.761807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.761814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.762009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.762016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.762235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.762242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.762512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.762519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.762828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.762835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.763130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.763137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 
00:34:16.212 [2024-11-20 08:31:20.763333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.763340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.763674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.763681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.764042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.764049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.212 qpair failed and we were unable to recover it. 00:34:16.212 [2024-11-20 08:31:20.764385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.212 [2024-11-20 08:31:20.764392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.764677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.764684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 
00:34:16.213 [2024-11-20 08:31:20.764987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.764994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.765392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.765398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.765706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.765713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.765881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.765889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.766100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.766107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 
00:34:16.213 [2024-11-20 08:31:20.766509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.766516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.766806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.766814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.767151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.767158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.767442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.767451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.767747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.767755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 
00:34:16.213 [2024-11-20 08:31:20.768062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.768071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.768463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.768470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.768765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.768773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.769075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.769083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.769403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.769412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 
00:34:16.213 [2024-11-20 08:31:20.770226] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:16.213 [2024-11-20 08:31:20.770278] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same connect() failure (errno = 111) against tqpair=0x7fbebc000b90, addr=10.0.0.2, port=4420 continues before and after the initialization lines, from 08:31:20.769 through 08:31:20.774; repeated occurrences elided ...]
00:34:16.213 [2024-11-20 08:31:20.774922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.213 [2024-11-20 08:31:20.774930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.213 qpair failed and we were unable to recover it. 00:34:16.213 [2024-11-20 08:31:20.775210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.775218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.775523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.775531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.775827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.775836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.776152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.776160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.776468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.776476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.776661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.776669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.776933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.776941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.777200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.777208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.777384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.777393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.777715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.777724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.778029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.778037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.778346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.778354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.778675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.778683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.778990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.778999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.779176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.779184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.779501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.779509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.779675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.779683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.779978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.779986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.780246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.780255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.780468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.780476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.780778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.780786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.781085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.781093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.781414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.781422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.781734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.781742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.781819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.781826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.782006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.782018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.782323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.782331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.782624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.782632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.782930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.782938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.214 [2024-11-20 08:31:20.783236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.783243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.783548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.783554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.783860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.783872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.784190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.784197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 00:34:16.214 [2024-11-20 08:31:20.784486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.214 [2024-11-20 08:31:20.784493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.214 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.784826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.784833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.785145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.785153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.785464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.785471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.785673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.785680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.785842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.785850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.786152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.786160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.786381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.786388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.786666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.786674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.786995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.787002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.787173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.787182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.787485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.787493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.787794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.787802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.788083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.788090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.788421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.788428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.788723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.788730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.789028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.789035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.789345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.789352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.789552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.789560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.789909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.790239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.790247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.790611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.790619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.790980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.790989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.791309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.791316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.791655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.791663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.791849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.791857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.792063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.792071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.792369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.792376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.792739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.792746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.792989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.792996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.793320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.793327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.793641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.793648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.793982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.793992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.794342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.794350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.794557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.794565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.794740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.794748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 
00:34:16.215 [2024-11-20 08:31:20.795141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.795148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.795464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.795472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.215 qpair failed and we were unable to recover it. 00:34:16.215 [2024-11-20 08:31:20.795783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.215 [2024-11-20 08:31:20.795790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.216 qpair failed and we were unable to recover it. 00:34:16.216 [2024-11-20 08:31:20.796106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.216 [2024-11-20 08:31:20.796114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.216 qpair failed and we were unable to recover it. 00:34:16.216 [2024-11-20 08:31:20.796440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.216 [2024-11-20 08:31:20.796447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.216 qpair failed and we were unable to recover it. 
00:34:16.216 [2024-11-20 08:31:20.796742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.216 [2024-11-20 08:31:20.796749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.216 qpair failed and we were unable to recover it.
00:34:16.219 [... the connect()/sock-connection-error/qpair-failed triplet above repeats ~114 more times, timestamps 08:31:20.797059 through 08:31:20.828732, all with errno = 111 (ECONNREFUSED) for tqpair=0x7fbebc000b90 against 10.0.0.2:4420 ...]
00:34:16.219 [2024-11-20 08:31:20.828902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.829212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.829220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.829551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.829558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.829639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.829646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.829816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.829823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 
00:34:16.219 [2024-11-20 08:31:20.829913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.829919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.830215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.830223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.830562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.830569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.830889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.830897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.831258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.831265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 
00:34:16.219 [2024-11-20 08:31:20.831479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.831487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.831666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.831673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.832015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.832022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.832354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.832361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.832683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.832690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 
00:34:16.219 [2024-11-20 08:31:20.833028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.833035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.833227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.833234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.833524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.833530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.833738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.833745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.834108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.834116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 
00:34:16.219 [2024-11-20 08:31:20.834285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.834292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.834481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.834488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.834769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.834776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.835101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.835108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.835417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.835424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 
00:34:16.219 [2024-11-20 08:31:20.835764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.835771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.836090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.836098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.836440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.836447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.219 [2024-11-20 08:31:20.836765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.219 [2024-11-20 08:31:20.836772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.219 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.837082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.837090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.837401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.837409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.837571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.837579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.837879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.837887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.838166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.838173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.838493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.838500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.838860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.838871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.839183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.839194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.839507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.839515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.839692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.839699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.840065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.840072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.840409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.840417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.840759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.840765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.841080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.841087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.841396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.841404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.841729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.841737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.841937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.841944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.842143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.842150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.842509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.842516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.842828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.842836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.843009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.843017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.843223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.843231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.843505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.843512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.843832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.843839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.844036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.844044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.844376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.844384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.844557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.844565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.844749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.844756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.844989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.844998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.845224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.845232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.845544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.845551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.845870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.845878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.846209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.846216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.846527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.846535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.846832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.846840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.847160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.847168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 
00:34:16.220 [2024-11-20 08:31:20.847502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.847509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.220 qpair failed and we were unable to recover it. 00:34:16.220 [2024-11-20 08:31:20.847829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.220 [2024-11-20 08:31:20.847836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.848144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.848151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.848467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.848474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.848788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.848796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 
00:34:16.221 [2024-11-20 08:31:20.849162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.849169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.849443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.849451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.849807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.849814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.850120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.850128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 00:34:16.221 [2024-11-20 08:31:20.850368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.221 [2024-11-20 08:31:20.850376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.221 qpair failed and we were unable to recover it. 
00:34:16.221 [2024-11-20 08:31:20.850674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.221 [2024-11-20 08:31:20.850682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.221 qpair failed and we were unable to recover it.
[The same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 08:31:20.850674 through 08:31:20.882382; repeated entries omitted. One distinct message appears within the run:]
00:34:16.223 [2024-11-20 08:31:20.879706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:16.224 [2024-11-20 08:31:20.882791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.882798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.883083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.883092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.883395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.883402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.883711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.883718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.884005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.884012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 
00:34:16.224 [2024-11-20 08:31:20.884303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.884310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.884696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.884704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.885005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.885012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.885324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.885331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.885527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.885534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 
00:34:16.224 [2024-11-20 08:31:20.885858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.885868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.886252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.886260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.886567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.886574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.886792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.886799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.887104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.887112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 
00:34:16.224 [2024-11-20 08:31:20.887417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.887424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.887799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.887806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.888124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.888132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.888315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.888323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.888634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.888642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 
00:34:16.224 [2024-11-20 08:31:20.888938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.888946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.889272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.889279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.889464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.224 [2024-11-20 08:31:20.889472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.224 qpair failed and we were unable to recover it. 00:34:16.224 [2024-11-20 08:31:20.889818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.889825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.890167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.890174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.890374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.890380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.890691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.890698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.891013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.891020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.891256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.891263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.891539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.891546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.891769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.891777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.892151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.892159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.892478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.892486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.892809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.892816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.893122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.893129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.893446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.893453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.893746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.893754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.894055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.894062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.894247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.894254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.894578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.894585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.894883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.894891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.895217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.895224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.895529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.895536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.895850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.895857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.896147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.896154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.896522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.896529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.896696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.896707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.896875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.896882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.897151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.897158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.897456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.897463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.897808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.898113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.898121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.898421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.898428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.898742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.898750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.899084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.899092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.899418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.899425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.899605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.899614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.899927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.899935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.900238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.900245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.225 [2024-11-20 08:31:20.900447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.900454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 
00:34:16.225 [2024-11-20 08:31:20.900763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.225 [2024-11-20 08:31:20.900770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.225 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.901089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.901096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.901399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.901406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.901725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.901732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.901977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.901985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 
00:34:16.226 [2024-11-20 08:31:20.902133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.902140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.902447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.902454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.902774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.902781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.902960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.902968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.903270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.903277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 
00:34:16.226 [2024-11-20 08:31:20.903633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.903640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.903953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.903961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.904368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.904375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.904564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.904572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.904642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.904650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 
00:34:16.226 [2024-11-20 08:31:20.904955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.904963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.905139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.905147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.905329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.905340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.905671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.905679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 00:34:16.226 [2024-11-20 08:31:20.906005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.226 [2024-11-20 08:31:20.906013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.226 qpair failed and we were unable to recover it. 
00:34:16.504 [2024-11-20 08:31:20.913484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.913491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.913834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.913842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.914214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.914223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.914531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.914539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.914854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.915138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:16.504 [2024-11-20 08:31:20.915165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:16.504 [2024-11-20 08:31:20.915173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:16.504 [2024-11-20 08:31:20.915179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:16.504 [2024-11-20 08:31:20.915185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:16.504 [2024-11-20 08:31:20.915197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.915204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.915428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.915436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.915753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.915760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
00:34:16.504 [2024-11-20 08:31:20.915943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.915951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.504 qpair failed and we were unable to recover it.
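The repeated `connect() failed, errno = 111` entries above are ECONNREFUSED: on Linux, errno 111 means the peer was reachable but nothing was accepting connections on the port, so the initiator's `posix_sock_create` could not open a socket to the NVMe/TCP target at 10.0.0.2:4420. A minimal sketch of the same failure mode (assumptions: Linux errno numbering, and a loopback port chosen so that no listener is bound to it; the address and port here are illustrative, not the test's actual target):

```python
import errno
import socket

# On Linux, errno 111 is ECONNREFUSED (other platforms use different numbers).
print(errno.ECONNREFUSED)  # 111 on Linux

# Pick a loopback port with no listener: bind to an ephemeral port, note it,
# then close the socket so connect() below is refused (small race assumed away).
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# This mirrors what posix_sock_create hits when the target is not yet listening.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1.0)
try:
    s.connect(("127.0.0.1", port))
except OSError as e:
    print(e.errno == errno.ECONNREFUSED)  # True when the connection is refused
finally:
    s.close()
```

In the log this recurs because the host keeps retrying the qpair while the target side of the test has not (yet) bound its listener; each retry gets a fresh refusal and the qpair cannot recover.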
00:34:16.504 [2024-11-20 08:31:20.916822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:34:16.504 [2024-11-20 08:31:20.916902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:34:16.504 [2024-11-20 08:31:20.917046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:34:16.504 [2024-11-20 08:31:20.917046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:34:16.504 [2024-11-20 08:31:20.917185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.504 [2024-11-20 08:31:20.917197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.505 qpair failed and we were unable to recover it.
00:34:16.505 [2024-11-20 08:31:20.917529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.505 [2024-11-20 08:31:20.917536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.505 qpair failed and we were unable to recover it.
00:34:16.505 [2024-11-20 08:31:20.917858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.505 [2024-11-20 08:31:20.917868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.505 qpair failed and we were unable to recover it.
00:34:16.505 [2024-11-20 08:31:20.918073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.505 [2024-11-20 08:31:20.918080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.505 qpair failed and we were unable to recover it.
00:34:16.506 [2024-11-20 08:31:20.936097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.936105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-11-20 08:31:20.936314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.936321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-11-20 08:31:20.936688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.936695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-11-20 08:31:20.937021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.937029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-11-20 08:31:20.937358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.937366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 
00:34:16.506 [2024-11-20 08:31:20.937559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.937566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-11-20 08:31:20.937965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-11-20 08:31:20.937972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.938172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.938179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.938519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.938527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.938729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.938737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.939081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.939089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.939269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.939277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.939552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.939560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.939867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.939876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.940185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.940192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.940512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.940520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.940802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.940813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.941157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.941165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.941324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.941331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.941658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.941667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.941990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.941998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.942165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.942173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.942381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.942390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.942550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.942557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.942743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.942751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.942845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.942852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.943110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.943118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.943415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.943423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.943714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.943723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.943891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.943900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.944089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.944096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.944370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.944378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.944697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.944704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.944853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.944860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.945157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.945165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.945475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.945481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.945783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.945790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.945991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.945998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.946338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.946345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.946646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.946653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-11-20 08:31:20.946977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.946985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.947150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.947157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.947388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.947395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.947559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-11-20 08:31:20.947566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-11-20 08:31:20.947869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.947877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.948181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.948188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.948469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.948476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.948543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.948549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.948854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.948861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.949050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.949058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.949346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.949354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.949698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.949705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.949869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.949876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.950157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.950171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.950487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.950494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.950814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.950822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.950999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.951202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.951209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.951408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.951415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.951754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.951761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.951832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.951838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.952003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.952010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.952300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.952307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.952638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.952645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.952940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.952948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.953249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.953256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.953304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.953310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.953553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.953560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.953639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.953646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.953949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.953957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.954266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.954274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.954446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.954453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.954657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.954664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.954960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.954967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-11-20 08:31:20.955255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.955262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-11-20 08:31:20.955463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-11-20 08:31:20.955470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fbebc000b90, addr=10.0.0.2, port=4420 repeats roughly 115 more times between 08:31:20.955 and 08:31:20.986; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:16.511 [2024-11-20 08:31:20.985859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.985873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.986090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.986369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.986376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.986694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.986702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.986853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.986865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.987063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.987070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.987366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.987373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.987570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.987578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.987904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.987911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.988127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.988134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.988437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.988444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.988812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.989119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.989126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.989297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.989304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.989529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.989536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.989872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.989880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.990041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.990049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.990208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.990215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.990441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.990448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.990759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.990765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.990928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.990935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.991155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.991162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.991485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.991492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.991790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.991798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.991836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.991843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.992161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.992495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.992812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.992819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.993192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.993199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.993543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.993550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.993869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.993876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.994176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.994184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.994500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.994507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.994821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.994827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.995171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.995178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 
00:34:16.512 [2024-11-20 08:31:20.995215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.995223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.995593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.995600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.512 [2024-11-20 08:31:20.995759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.512 [2024-11-20 08:31:20.995767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.512 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.996097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.996104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.996421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.996428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:20.996745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.996753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.997074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.997081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.997291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.997298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.997635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.997642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.997827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.997835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:20.997999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.998007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.998180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.998188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.998364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.998371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.998694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.998701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.999037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.999044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:20.999364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.999371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:20.999692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:20.999698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.000028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.000036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.000071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.000079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.000442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.000449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:21.000761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.000768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.000960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.000968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.001171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.001178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.001464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.001471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.001787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.001793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:21.002088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.002096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.002264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.002272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.002461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.002468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.002737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.002744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.003074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.003081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:21.003367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.003375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.003681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.003691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.004008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.004015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.004347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.004354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.004680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.004687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 
00:34:16.513 [2024-11-20 08:31:21.004848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.513 [2024-11-20 08:31:21.004855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.513 qpair failed and we were unable to recover it. 00:34:16.513 [2024-11-20 08:31:21.005165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-11-20 08:31:21.005172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-11-20 08:31:21.005348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-11-20 08:31:21.005355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-11-20 08:31:21.005533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-11-20 08:31:21.005540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-11-20 08:31:21.005851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-11-20 08:31:21.005858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 
00:34:16.514 [2024-11-20 08:31:21.006146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-11-20 08:31:21.006153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats verbatim, with only the timestamps advancing, from 08:31:21.006146 through 08:31:21.036193 ...]
00:34:16.517 [2024-11-20 08:31:21.036352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.036360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.036578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.036585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.036793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.036801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.037117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.037125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.037417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.037425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-11-20 08:31:21.037629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.037637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.037705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.037712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.038060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.038068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.038250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.038258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.038553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.038561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-11-20 08:31:21.038887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.038895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.039216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.039223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.039518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.039532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.039686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.039693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.040017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-11-20 08:31:21.040352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.040517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.040565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.040606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.040850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.040857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-11-20 08:31:21.041092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.041100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.041322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.041329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.041623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.041630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.041836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.041843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.042114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.042122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-11-20 08:31:21.042501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.042508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.042686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.042693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-11-20 08:31:21.042990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-11-20 08:31:21.042997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.043217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.043225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.043433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.043441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.043706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.043713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.044043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.044051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.044380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.044388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.044425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.044432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.044744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.044752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.045081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.045089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.045287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.045297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.045576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.045584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.045992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.046039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.046236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.046446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.046619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.046808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.046814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.047136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.047144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.047477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.047484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.047776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.047784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.047978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.047986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.048170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.048178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.048537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.048544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.048856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.048867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.049214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.049222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.049505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.049513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.049851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.049859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.050089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.050097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.050426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.050433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.050732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.050739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.051054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.051062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.051351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.051358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.051541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.051549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.051859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.051874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.052219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.052226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.052542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.052550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.052724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.052734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-11-20 08:31:21.053046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-11-20 08:31:21.053054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-11-20 08:31:21.053400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.053408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.053611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.053619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.053772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.053780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.053970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.053978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.054265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.054272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 
00:34:16.519 [2024-11-20 08:31:21.054473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.054481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.054661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.054669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.054984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.054991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.055149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.055157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 00:34:16.519 [2024-11-20 08:31:21.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.519 [2024-11-20 08:31:21.055404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.519 qpair failed and we were unable to recover it. 
00:34:16.519 [2024-11-20 08:31:21.055696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.055703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.056069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.056077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.056265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.056272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.056570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.056577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.056893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.056901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.057065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.057072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.057447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.057454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.057654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.057661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.058039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.058046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.058372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.058380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.058717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.058724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.058898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.058905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.059143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.059150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.059315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.059323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.059523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.059531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.059711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.059719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.060010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.060018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.060341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.060348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.060688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.060696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.060880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.060889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.061067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.061076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.061263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.061270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.061526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.061534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.061827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.061835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.061886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.061893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.062220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.062226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.062414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.062422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.519 [2024-11-20 08:31:21.062727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.519 [2024-11-20 08:31:21.062734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.519 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.063036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.063047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.063114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.063121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.063484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.063492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.063785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.063792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.063985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.063993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.064288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.064295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.064473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.064481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.064788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.064795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.064993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.065001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.065303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.065311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.065644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.065652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.065834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.065842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.066181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.066189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.066373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.066381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.066706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.066713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.066773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.066780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.067080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.067087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.067433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.067440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.067618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.067625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.067971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.067979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.068167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.068174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.068386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.068393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.068750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.068758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.068961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.068968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.069126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.069133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.069422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.069429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.069630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.069638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.069825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.069833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.070206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.070215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.070385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.070552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.070561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.070864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.070873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.071244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.071253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.071304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.071311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.071485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.071492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.071885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.071893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.072182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.072190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.520 [2024-11-20 08:31:21.072391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.520 [2024-11-20 08:31:21.072398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.520 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.072597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.072604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.072799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.072807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.073180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.073189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.073575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.073582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.073780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.073787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.073959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.073967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.074157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.074163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.074494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.074501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.074685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.074692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.074755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.074761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.074811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.074819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.075127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.075134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.075458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.075465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.075634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.075641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.075927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.075935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.076251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.076258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.076590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.076598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.076788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.076796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.076981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.076989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.077179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.077187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.077402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.077410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.077566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.077574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.077897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.077905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.078323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.078329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.078509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.078516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.078696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.078704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.078739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.078746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.078945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.079279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.079286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.079460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.079467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.079758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.079766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.080100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.080108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.080426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.080433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.080728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.080736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.080952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.080959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.081152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.081159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.081486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.081493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.521 [2024-11-20 08:31:21.081822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.521 [2024-11-20 08:31:21.081829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.521 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.082070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.082077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.082273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.082280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.082327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.082334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.082506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.082514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.082722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.082730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.083086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.083098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.083486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.083679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.083686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.083860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.083875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.084176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.084185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.084397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.522 [2024-11-20 08:31:21.084404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420
00:34:16.522 qpair failed and we were unable to recover it.
00:34:16.522 [2024-11-20 08:31:21.084564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.084572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.084884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.084893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.085214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.085222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.085404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.085410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.085714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.085721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 
00:34:16.522 [2024-11-20 08:31:21.086097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.086105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.086539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.086547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.086868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.086876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.086910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.086917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.087312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.087319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 
00:34:16.522 [2024-11-20 08:31:21.087496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.087503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.087871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.087878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.088251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.088258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.088487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.088494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.088729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.088737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 
00:34:16.522 [2024-11-20 08:31:21.089029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.089038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.089265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.089273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.089587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.089595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.089778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.089786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.090144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.090152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 
00:34:16.522 [2024-11-20 08:31:21.090321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.090328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.522 [2024-11-20 08:31:21.090561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.522 [2024-11-20 08:31:21.090569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.522 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.090902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.090909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.091111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.091117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.091403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.091410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.091729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.091909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.091916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.092289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.092622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.092629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.093046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.093054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.093433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.093797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.093804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.094015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.094022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.094233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.094243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.094412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.094419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.094840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.094847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.094956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.094964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.095289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.095296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.095513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.095520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.095883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.095890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.096294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.096302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.096478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.096485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.096811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.096821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.097162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.097170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.097331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.097338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.097745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.097930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.097937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.098164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.098172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.098403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.098410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.098625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.098632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.098949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.098957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.099269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.099276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.099678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.099686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.099855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.099865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.100189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.100196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-11-20 08:31:21.100392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.100401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.100577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.100585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.100867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.100875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.101043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.101051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-11-20 08:31:21.101367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-11-20 08:31:21.101374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-11-20 08:31:21.101656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.101663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.101822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.101831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.102146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.102153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.102483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.102660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.102667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-11-20 08:31:21.102952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.102959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.103002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.103009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.103368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.103374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.103559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.103566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.103954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.103961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-11-20 08:31:21.104265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.104272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.104635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.104642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.104817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.104824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.105193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.105202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-11-20 08:31:21.105391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.105398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-11-20 08:31:21.105594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-11-20 08:31:21.105600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it.
00:34:16.527 [2024-11-20 08:31:21.136406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.136413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.136771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.136778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.137103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.137111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.137473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.137482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.137634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.137641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 
00:34:16.527 [2024-11-20 08:31:21.137920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.137929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.138208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.138215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.138280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.138286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.138549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.138556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.138887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.138894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 
00:34:16.527 [2024-11-20 08:31:21.139211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.139218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.139347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.139354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.139686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.139693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.139873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.139880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.140003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.140010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 
00:34:16.527 [2024-11-20 08:31:21.140304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.140311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.527 [2024-11-20 08:31:21.140498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.527 [2024-11-20 08:31:21.140505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.527 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.140896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.140904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.141219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.141226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.141426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.141433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.141762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.141769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.141947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.141954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.142247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.142254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.142589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.142596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.142769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.142776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.142955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.142962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.143125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.143132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.143413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.143420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.143627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.143634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.143979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.143986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.144303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.144310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.144483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.144490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.144705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.144712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.144871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.144879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.145157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.145164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.145435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.145442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.145739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.145747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.145914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.145921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.146214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.146221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.146532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.146538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.146867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.146875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.147042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.147049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.147232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.147239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.147312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.147320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.147634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.147641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.147844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.147851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.148277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.148284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.148654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.148662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.148835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.148843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.149157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.149164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 
00:34:16.528 [2024-11-20 08:31:21.149399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.149406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.149733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.149740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.150088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.150095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.150388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.150395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.528 qpair failed and we were unable to recover it. 00:34:16.528 [2024-11-20 08:31:21.150700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.528 [2024-11-20 08:31:21.150707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 
00:34:16.529 [2024-11-20 08:31:21.151029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.151037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.151301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.151308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.151491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.151782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.151789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.151827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.151834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 
00:34:16.529 [2024-11-20 08:31:21.152004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.152011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.152198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.152205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.152506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.152513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.152726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.152733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.153080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.153087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 
00:34:16.529 [2024-11-20 08:31:21.153236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.153243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.153443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.153450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.153769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.153776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.154088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.154096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.154296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.154303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 
00:34:16.529 [2024-11-20 08:31:21.154588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.154595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.154906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.154913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.155216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.155224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.155538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.155545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 00:34:16.529 [2024-11-20 08:31:21.155720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.155727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it. 
00:34:16.529 [2024-11-20 08:31:21.155942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.529 [2024-11-20 08:31:21.155950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.529 qpair failed and we were unable to recover it.
[identical error sequence repeated ~110x from 08:31:21.155942 through 08:31:21.184743: every connect() attempt to addr=10.0.0.2, port=4420 failed with errno = 111, mostly on tqpair=0x7fbebc000b90 and briefly on tqpair=0x7fbec4000b90, and no qpair could be recovered]
00:34:16.532 [2024-11-20 08:31:21.185029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.185038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.185341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.185349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.185647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.185654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.185976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.185984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.186169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.186175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 
00:34:16.532 [2024-11-20 08:31:21.186493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.186500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.186837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.186844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.187061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.187069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.187424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.187431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.187753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.187760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 
00:34:16.532 [2024-11-20 08:31:21.187795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.532 [2024-11-20 08:31:21.187803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.532 qpair failed and we were unable to recover it. 00:34:16.532 [2024-11-20 08:31:21.188105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.188113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.188296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.188303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.188591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.188599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.188926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.188933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.189260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.189267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.189621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.189629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.189927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.189935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.190129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.190136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.190499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.190506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.190838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.190845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.191171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.191178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.191468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.191476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.191807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.191814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.192116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.192125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.192323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.192331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.192660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.192668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.193068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.193076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.193262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.193268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.193583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.193591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.193655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.193662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.194005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.194013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.194328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.194336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.194393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.194399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.194654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.194661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.194982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.194990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.195155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbebc000b90 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.195663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.195702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.196075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.196089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.196423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.196434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.196623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.196639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.196932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.196943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.197271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.197310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.197672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.197685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.197844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.197854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 
00:34:16.533 [2024-11-20 08:31:21.198155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.198193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.198410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.198423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.198603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.198614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.533 [2024-11-20 08:31:21.198990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.533 qpair failed and we were unable to recover it. 00:34:16.533 [2024-11-20 08:31:21.199178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.199188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.199385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.199395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.199445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.199454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.199789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.199799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.199975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.199986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.200250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.200268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.200580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.200591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.200754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.200764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.200930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.200941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.201236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.201245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.201549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.201559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.201878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.201889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.202161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.202171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.202512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.202523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.202838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.202848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.203181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.203191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.203498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.203507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.203677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.203687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.203993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.204006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.204207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.204217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.204567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.204577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.204868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.204878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.205182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.205192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.205383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.205393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.205606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.205794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.205804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.534 [2024-11-20 08:31:21.206077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.206087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.206373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.206383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.206711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.206720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.207071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.207082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 00:34:16.534 [2024-11-20 08:31:21.207391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.534 [2024-11-20 08:31:21.207401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.534 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.238035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.238046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.238373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.238383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.238697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.238931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.239114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.239124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.239343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.239352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.239663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.239672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.239987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.239997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.240298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.240308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.240497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.240507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.240721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.240730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.241038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.241048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.241357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.241368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.241542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.241552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.241878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.241888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.242060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.242070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.242375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.242385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.242696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.242706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.243005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.243016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.243330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.243340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.243628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.243637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.243794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.243803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.244080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.244091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.244417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.244427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.244713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.244723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.244798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.244808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.245108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.245118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.245294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.245303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.245691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.245701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.245980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.245990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.246312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.246322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.246610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.246620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.246795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.246804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.246997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.247007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.247199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.247209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 
00:34:16.815 [2024-11-20 08:31:21.247467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.247478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.815 [2024-11-20 08:31:21.247784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.815 [2024-11-20 08:31:21.247795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.815 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.247835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.247844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.248049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.248060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.248399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.248408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.248718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.248727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.249073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.249084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.249246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.249256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.249436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.249446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.249737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.249747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.249946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.249956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.250257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.250268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.250548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.250557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.250724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.250734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.251115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.251125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.251494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.251505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.251826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.251836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.252017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.252027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.252440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.252449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.252755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.252767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.252957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.252967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.253418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.253428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.253712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.253722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.254048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.254058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.254342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.254352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.254539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.254548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.254837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.254847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.255167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.255177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.255489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.255499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.255822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.255832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.256006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.256016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.256358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.256367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.256702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.256712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.257028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.257039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.257336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.257346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.816 [2024-11-20 08:31:21.257532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.257542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.257711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.257721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.257925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.257936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.258249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.258259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 00:34:16.816 [2024-11-20 08:31:21.258424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.816 [2024-11-20 08:31:21.258434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.816 qpair failed and we were unable to recover it. 
00:34:16.817 [2024-11-20 08:31:21.258789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.258799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.259112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.259123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.259271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.259288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.259628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.259638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.259686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.259696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 
00:34:16.817 [2024-11-20 08:31:21.259981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.259991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.260177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.260187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.260392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.260402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.260687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.260698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 00:34:16.817 [2024-11-20 08:31:21.260886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.817 [2024-11-20 08:31:21.260896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.817 qpair failed and we were unable to recover it. 
00:34:16.817 [2024-11-20 08:31:21.261114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.261124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.261337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.261347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.261660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.261670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.261840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.261850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.262134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.262144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.262505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.262515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.262804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.262814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.263134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.263155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.263323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.263332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.263555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.263564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.263914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.263926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.264209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.264219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.264380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.264391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.264746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.264756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.265158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.265169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.265370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.265380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.265713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.265723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.266030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.266040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.266344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.266354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.266674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.266683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.266966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.266976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.817 [2024-11-20 08:31:21.267190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.817 [2024-11-20 08:31:21.267200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.817 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.267511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.267521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.267775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.267784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.268100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.268110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.268404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.268413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.268772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.268782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.269080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.269091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.269446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.269456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.269500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.269509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.269789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.269799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.270035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.270045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.270348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.270358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.270554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.270563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.270886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.270896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.271262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.271272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.271613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.271623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.271942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.271954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.272074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.272083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.272278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.272288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.272623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.272633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.272848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.272858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.273145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.273155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.273478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.273489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.273816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.273826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.274143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.274154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.274462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.274472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.274636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.274646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.274951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.274961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.275283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.275293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.275602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.275612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.275813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.275823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.276001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.276012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.276219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.276228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.276540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.276550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.276839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.276850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.277063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.277073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.277367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.277377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.277691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.277700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.277987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.277997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.818 qpair failed and we were unable to recover it.
00:34:16.818 [2024-11-20 08:31:21.278308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.818 [2024-11-20 08:31:21.278318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.278500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.278510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.278816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.278826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.279038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.279048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.279440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.279450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.279740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.279750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.280122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.280132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.280463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.280473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.280629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.280639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.280949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.280958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.281034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.281043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.281216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.281225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.281431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.281441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.281740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.281750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.282055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.282065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.282348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.282358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.282520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.282530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.282869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.282879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.283165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.283177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.283367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.283378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.283709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.283720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.283926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.283937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.284255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.284265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.284580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.284590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.284781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.284791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.284869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.284879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.285189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.285199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.285483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.285493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.285540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.285549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.285750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.285760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.286080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.286090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.286381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.286391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.286718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.286728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.287019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.287037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.287348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.819 [2024-11-20 08:31:21.287358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.819 qpair failed and we were unable to recover it.
00:34:16.819 [2024-11-20 08:31:21.287562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-11-20 08:31:21.287572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-11-20 08:31:21.287879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-11-20 08:31:21.287889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-11-20 08:31:21.288064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-11-20 08:31:21.288074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-11-20 08:31:21.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-11-20 08:31:21.288410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.288537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.288546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.288868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.288885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.289221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.289232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.289446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.289457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.289636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.289648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.289835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.289845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.290190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.290203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.290521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.290532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.290581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.290591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.290756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.290767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.291095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.291106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.291424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.291434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.291821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.291832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.292118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.292128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.292349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.292358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.292543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.292553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.292875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.292885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.293100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.293110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.293209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.293218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.293409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.293418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.293551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.293561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.293651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.293661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.294031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.294042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.294232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.294242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.294560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.294570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.294854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.294874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.295074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.295083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.295278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.295287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.295452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.295461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.295645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.295654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.295983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.295994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.296259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.296269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.296596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.296606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.296794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.296804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.297030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.297041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.297361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.297371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-11-20 08:31:21.297547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.297557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.297735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-11-20 08:31:21.297745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-11-20 08:31:21.298045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.298056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.298256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.298267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.298592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.298602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.298887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.298897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.299253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.299263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.299584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.299594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.299880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.299890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.300168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.300178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.300498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.300507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.300835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.300848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.301037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.301047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.301379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.301388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.301681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.301691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.301879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.301890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.301967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.301976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.302296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.302306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.302617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.302627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.302841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.302851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.303182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.303192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.303505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.303514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.303739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.303749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.303987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.303998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.304312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.304321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.304657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.304667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.304848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.304857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.305174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.305184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.305350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.305360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.305689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.305699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.306025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.306035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.306336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.306347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.306656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.306666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.306816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.306826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-11-20 08:31:21.307158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.307168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-11-20 08:31:21.307476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-11-20 08:31:21.307486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.307692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.307701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.307780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.307790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.308092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.308105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.308441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.308451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-11-20 08:31:21.308664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.308674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.309005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.309016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.309347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.309357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.309518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.309528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-11-20 08:31:21.309842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-11-20 08:31:21.309853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-11-20 08:31:21.310039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.822 [2024-11-20 08:31:21.310049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.822 qpair failed and we were unable to recover it.
[... the same three-record connect()/qpair failure sequence for tqpair=0xa2f490 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously through 2024-11-20 08:31:21.340641 ...]
00:34:16.825 [2024-11-20 08:31:21.340899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.340910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.341232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.341243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.341561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.341572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.341777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.341786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.341966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.341976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-11-20 08:31:21.342292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.342302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.342600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.342610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.342939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.342950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.343297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.343308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.343495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.343505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-11-20 08:31:21.343783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.343792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.344144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.344155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.344320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.344330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.344655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.344665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.344835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.344845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-11-20 08:31:21.345163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.345174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.345543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.345553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.345845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.345855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.346045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.346056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.346353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.346362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-11-20 08:31:21.346572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.346583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.346767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.346777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.346987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.346997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.347213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.347222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.347537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.347547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-11-20 08:31:21.347605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.347615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.347804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.347814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.348018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-11-20 08:31:21.348028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-11-20 08:31:21.348370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.348383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.348693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.348703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.349012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.349024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.349250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.349261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.349441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.349451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.349625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.349635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.349954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.349964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.350144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.350154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.350382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.350393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.350717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.350729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.351057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.351068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.351114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.351123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.351325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.351334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.351656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.351666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.351849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.351859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.352118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.352128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.352443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.352454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.352648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.352657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.352964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.352974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.353275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.353286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.353452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.353462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.353504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.353513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.353755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.353766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.353929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.353942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.354310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.354320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.354656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.354667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.354846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.354859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.355083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.355096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.355334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.355344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.355540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.355551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.355720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.355730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.355969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.355979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-11-20 08:31:21.356170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.356179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.356463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-11-20 08:31:21.356473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-11-20 08:31:21.356648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.356659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.356997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.357007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.357308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.357318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-11-20 08:31:21.357608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.357618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.357662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.357672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.357992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.358002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.358162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.358172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.358346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.358357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-11-20 08:31:21.358658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.358668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.358981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.358992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.359313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.359323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.359592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.359603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-11-20 08:31:21.359893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.359903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-11-20 08:31:21.360091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-11-20 08:31:21.360101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) and unrecoverable qpair errors for tqpair=0xa2f490 (addr=10.0.0.2, port=4420) repeat continuously from 08:31:21.360 through 08:31:21.391 ...]
00:34:16.830 [2024-11-20 08:31:21.392132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.392142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.392187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.392199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.392583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.392593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.392927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.392938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.392999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-11-20 08:31:21.393305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.393315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.393473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.393483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.393819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.393829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.394138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.394148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.394330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.394340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-11-20 08:31:21.394641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.394652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.394926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.394937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.395161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.395171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.395474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.395484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.395818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.395828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-11-20 08:31:21.396120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.396130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.396423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.396433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.396749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.396760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.397079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.397090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.397405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.397415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-11-20 08:31:21.397702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.397711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.397875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.397885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.398196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.398206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.398506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.398516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-11-20 08:31:21.398675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-11-20 08:31:21.398685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-11-20 08:31:21.398866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.398877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.399050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.399060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.399378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.399389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.399694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.399704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.399875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.399886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.400190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.400199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.400503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.400513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.400711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.400721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.401012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.401022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.401341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.401507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.401516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.401830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.401839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.402170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.402181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.402482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.402492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.402796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.402806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.402970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.402980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.403160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.403169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.403459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.403479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.403642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.403739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.403749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.403930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.403941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.404245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.404255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.404444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.404454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.404659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.404669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.404876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.404887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.405168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.405178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.405475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.405659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.405668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.406018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.406028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.406356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.406366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-11-20 08:31:21.406654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.406664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.406841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.406852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.407123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-11-20 08:31:21.407132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-11-20 08:31:21.407307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.407324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.407628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.407638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-11-20 08:31:21.407951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.407961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.408268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.408278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.408593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.408603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.408794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.408804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.409207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.409218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-11-20 08:31:21.409527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.409537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.409847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.409856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.410208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.410218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.410547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.410556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.410875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.410885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-11-20 08:31:21.411231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.411241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.411386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.411396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.411617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.411627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.411946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.411957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-11-20 08:31:21.412156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-11-20 08:31:21.412166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-11-20 08:31:21.412409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.832 [2024-11-20 08:31:21.412419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:16.832 qpair failed and we were unable to recover it.
00:34:16.834 [... identical sequence — connect() failed (errno = 111), sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeated continuously from 08:31:21.412733 through 08:31:21.443538 ...]
00:34:16.835 [2024-11-20 08:31:21.443698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.443708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.444071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.444082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.444128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.444137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.444416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.444425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.444612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.444622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-11-20 08:31:21.444940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.444951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.445127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.445136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.445523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.445534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.445846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.445857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.446214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.446225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-11-20 08:31:21.446543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.446553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.446743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.446753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.447070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.447081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.447490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.447500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.447811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.447821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-11-20 08:31:21.447988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.447999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.448346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-11-20 08:31:21.448356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-11-20 08:31:21.448397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.448405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.448692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.448703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.448868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.448878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.449074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.449084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.449399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.449409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.449567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.449577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.449848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.449858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.450154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.450165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.450476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.450487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.450673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.450684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.450903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.450913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.451204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.451214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.451376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.451649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.451659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.451869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.451879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.452185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.452195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.452469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.452479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.452782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.452791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.453124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.453135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.453338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.453349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.453536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.453548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.453714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.453725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.454022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.454032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.454346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.454361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.454712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.454722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.455019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.455029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.455348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.455357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.455674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.455685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.455876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.456205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.456215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.456396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.456414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.456720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.456730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.456975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.456985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.457033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.457044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.457344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.457353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.457683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.457692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.457998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.458009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-11-20 08:31:21.458409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-11-20 08:31:21.458758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-11-20 08:31:21.458768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.458955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.459271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.459281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.459489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.459499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.459813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.459823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-11-20 08:31:21.460009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.460020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.460194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.460204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.460484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.460495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.460692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.460702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.461001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.461011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-11-20 08:31:21.461386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.461396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.461705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.461714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.461923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.461933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.462244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.462254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.462420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.462430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-11-20 08:31:21.462708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.462718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.463003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.463014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.463332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.463342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.463508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.463517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-11-20 08:31:21.463834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.463844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-11-20 08:31:21.464213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-11-20 08:31:21.464223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it.
00:34:16.837 [... same sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated 114 more times from 08:31:21.464396 through 08:31:21.497690 ...]
00:34:16.840 [2024-11-20 08:31:21.497769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.497778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.498065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.498075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.498425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.498435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.498766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.498777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.498870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.498881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-11-20 08:31:21.499058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.499068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.499359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.499370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.499698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.499709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.500009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.500020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.500335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.500345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-11-20 08:31:21.500671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.500681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.500849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.500859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.501035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.501045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.501238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-11-20 08:31:21.501248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-11-20 08:31:21.501571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.501581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.501873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.501884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.502066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.502075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.502378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.502387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.502693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.502703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.502988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.503000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.503312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.503323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.503545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.503555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.503871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.503883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.504203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.504213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.504399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.504409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.504765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.504775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.504966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.504976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.505367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.505377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.505665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.505676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.505850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.505859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.506183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.506193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.506363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.506374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.506660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.506670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.507064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.507075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.507414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.507424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.507752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.507762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.507920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.507930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.508324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.508334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.508617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.508628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.508997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.509008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.509320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.509331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.509564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.509574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.509618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.509627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.510015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.510028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.510356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.510366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.510678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.510688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.511042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.511053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.511364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.511375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.511562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.511572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.511923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.511934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-11-20 08:31:21.512162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.512173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-11-20 08:31:21.512341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-11-20 08:31:21.512352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.512550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.512560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.512743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.512753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.513096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.513107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.513343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.513353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.513526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.513537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.513721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.513732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.514050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.514061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.514356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.514367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.514754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.514769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.514941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.514951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.515281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.515292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.515606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.515617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.515942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.515953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.516282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.516292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.516670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.516680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.516966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.516977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.517358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.517368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.517576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.517586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.517912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.517922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.518311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.518321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.518658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.518668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.518978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.518989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.519166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.519176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.519509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.519519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.519819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.519830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.520017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.520029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.520350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.520361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.520667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.520677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.520956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.520967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.521177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.521187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.521500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.521510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.521702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.521713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-11-20 08:31:21.522076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-11-20 08:31:21.522087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-11-20 08:31:21.522257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.522267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.522491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.522503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.522697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.522709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.523018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.523029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.523194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.523204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.523396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.523406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.523720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.523730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.524042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.524053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.524347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.524358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.524631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.524641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.524832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.524842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.525180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.525191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.525366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.525376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.113 qpair failed and we were unable to recover it.
00:34:17.113 [2024-11-20 08:31:21.525589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.113 [2024-11-20 08:31:21.525599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.525785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.525795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.526008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.526019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.526185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.526195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.526474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.526485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.526657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.526667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.526837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.526847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.527172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.527182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.527503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.527513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.527823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.527834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.528011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.528022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.528395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.528406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.528718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.528729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.529046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.529062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.529402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.529413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.529624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.529634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.529924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.529935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.530265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.530275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.530599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.530609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.530951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.530961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.531275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.531287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.531603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.531615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.531828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.531838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.532116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.532127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.532473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.532483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.532840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.532851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.533245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.533257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.533560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.533571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.533890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.533901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.533948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.533958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.534152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.534163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.114 [2024-11-20 08:31:21.534211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.114 [2024-11-20 08:31:21.534221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.114 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.534467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.534477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.534804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.534813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.535121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.535132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.535531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.535541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.535711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.535721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.536016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.536026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.536203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.536213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.536541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.536551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.536912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.536923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.537229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.537240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.537528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.537537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.537878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.537889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.538264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.538274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.538580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.538589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.538932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.538943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.539168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.539178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.539510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.539520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.539841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.539850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.540158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.540169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.540409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.540419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.540734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.540744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.540989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.540999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.541179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.541190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.541493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.541502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.541693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.541703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.541895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.541905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.542333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.542344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.542552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.542561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.542859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.542873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.543048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.543058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.543387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.115 [2024-11-20 08:31:21.543398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.115 qpair failed and we were unable to recover it.
00:34:17.115 [2024-11-20 08:31:21.543717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.543727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.543945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.543955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.544307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.544317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.544735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.544745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.544923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.544934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.545301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.545311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.545517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.545526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.545746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.545756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.545938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.545947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.546038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.546049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.546256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.546266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.546578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.546589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.546920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.546930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.547018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.547027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.547340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.547349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.547727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.547736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.547809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.547818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.548137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.548147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.548354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.116 [2024-11-20 08:31:21.548366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.116 qpair failed and we were unable to recover it.
00:34:17.116 [2024-11-20 08:31:21.548411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.548421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.548613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.548624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.548914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.548926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.549155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.549164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.549672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.549681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 
00:34:17.116 [2024-11-20 08:31:21.549722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.549732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.550054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.550064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.550257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.550266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.550626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.550635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.550875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.550885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 
00:34:17.116 [2024-11-20 08:31:21.551085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.551095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.116 [2024-11-20 08:31:21.551402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.116 [2024-11-20 08:31:21.551412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.116 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.551573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.551583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.551981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.551992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.552252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.552262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.552368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.552377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.552530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.552540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.552742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.552753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.552811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.552820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.553188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.553198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.553526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.553536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.553816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.553825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.554165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.554175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.554585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.554595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.554922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.554933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.555126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.555136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.555457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.555466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.555805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.555815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.556136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.556146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.556327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.556337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.556644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.556654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.556949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.556960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.557315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.557325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.557681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.557690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.558038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.558048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.558312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.558322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.558524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.558751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.558760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.559074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.559084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.559479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.559489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 
00:34:17.117 [2024-11-20 08:31:21.559695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.559708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.559880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.559890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.117 qpair failed and we were unable to recover it. 00:34:17.117 [2024-11-20 08:31:21.560205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.117 [2024-11-20 08:31:21.560214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.560436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.560446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.560783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.560793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.561026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.561037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.561233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.561242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.561597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.561607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.561922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.561935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.562160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.562170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.562379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.562390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.562560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.562571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.562759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.562768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.562865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.562875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.563178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.563189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.563521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.563531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.563860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.563874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.563928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.563937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.564253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.564262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.564596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.564606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.564897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.564907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.565262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.565271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.565513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.565523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.565883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.565894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.566247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.566258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.566589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.566598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.566920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.566931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.567130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.567141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.567351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.567361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.567436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.567446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.118 [2024-11-20 08:31:21.567638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.567648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.567929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.567940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.568278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.568287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.568630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.568640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 00:34:17.118 [2024-11-20 08:31:21.568969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.118 [2024-11-20 08:31:21.568979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.118 qpair failed and we were unable to recover it. 
00:34:17.119 [2024-11-20 08:31:21.569162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.569172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.569393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.569403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.569618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.569627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.569962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.569972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.570259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.570278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 
00:34:17.119 [2024-11-20 08:31:21.570444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.570454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.570765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.570775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.570915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.570925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.571142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.571152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 00:34:17.119 [2024-11-20 08:31:21.571477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.119 [2024-11-20 08:31:21.571487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.119 qpair failed and we were unable to recover it. 
00:34:17.119 [2024-11-20 08:31:21.571653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.119 [2024-11-20 08:31:21.571663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.119 qpair failed and we were unable to recover it.
00:34:17.119 [the three-line error above repeats verbatim with timestamps 08:31:21.571976 through 08:31:21.603006; the shell trace lines interleaved with those repeats were:]
00:34:17.119 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:17.119 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:34:17.119 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:34:17.119 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:17.119 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.123 [2024-11-20 08:31:21.603326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.603338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.603641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.603651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.604052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.604064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.604307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.604318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.604647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.604658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.123 [2024-11-20 08:31:21.604832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.604842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.605187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.605198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.605508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.605518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.605844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.605855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.606194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.606205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.123 [2024-11-20 08:31:21.606539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.606550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.606607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.606617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.606916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.606926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.607210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.607221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.607266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.607275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.123 [2024-11-20 08:31:21.607609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.607619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.607933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.607943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.608282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.608292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.608604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.608614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.608928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.608939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.123 [2024-11-20 08:31:21.609103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.609114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.609454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.609465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.609810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.609820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.610010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.610020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.610353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.610365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.123 [2024-11-20 08:31:21.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.610770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.611082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.611093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.611260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.611270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.611529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.611539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 00:34:17.123 [2024-11-20 08:31:21.611724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.123 [2024-11-20 08:31:21.611735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.123 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-11-20 08:31:21.611900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.611910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.612210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.612220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.612597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.612607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.612788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.612799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.612937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.612948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-11-20 08:31:21.613122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.613132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.613333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.613344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.613756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.613767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.614055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.614066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.614359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.614371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:17.124 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:17.124 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.124 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.124 [2024-11-20 08:31:21.616733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.616744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.616917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.616928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.617237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.617248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.617637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.617648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.617865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.617876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-11-20 08:31:21.618187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.618197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.618540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.618550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-11-20 08:31:21.618859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-11-20 08:31:21.618873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.619046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.619055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.619433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.619444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.619590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.619600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.619928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.619938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.619983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.619992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.620128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.620138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.620338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.620501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.620512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.620839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.620850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.621027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.621037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.621310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.621319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.621632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.621641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.621838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.621851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.622121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.622131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.622452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.622461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.622863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.622875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.623169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.623179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.623468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.623477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.623660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.623669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.623965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.623976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.624251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.624261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.624456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.624466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.624652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.624752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.624762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.625058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.625068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.625360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.625371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-11-20 08:31:21.625683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-11-20 08:31:21.625694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-11-20 08:31:21.625857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.125 [2024-11-20 08:31:21.625869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.125 qpair failed and we were unable to recover it.
00:34:17.125 [2024-11-20 08:31:21.626031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.125 [2024-11-20 08:31:21.626041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.125 qpair failed and we were unable to recover it.
00:34:17.125 [2024-11-20 08:31:21.626128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.125 [2024-11-20 08:31:21.626137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.125 qpair failed and we were unable to recover it.
00:34:17.125 [2024-11-20 08:31:21.626462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.125 [2024-11-20 08:31:21.626471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.125 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.626790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.626799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.626991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.627001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.627343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.627353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.627541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.627551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.627915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.627925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.628291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.628301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.628594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.628605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.628928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.628939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.629255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.629264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.629554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.629565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.629900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.629911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.630083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.630092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.630403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.630412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.630732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.630741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.631051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.631061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.631368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.631378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.631559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.631571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.632022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.632032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.632349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.632359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.632539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.632549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.632741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.632751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.632796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.632807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.633104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.633116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.633427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.633437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.633763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.633773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.634059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.634070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.634226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.634235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.634438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.634447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.634839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.634850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.635100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.635110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.635321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.635331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.635645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.635655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.635892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.635902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.126 qpair failed and we were unable to recover it.
00:34:17.126 [2024-11-20 08:31:21.636072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.126 [2024-11-20 08:31:21.636082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.636386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.636396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.636600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.636610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.636885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.636898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.637202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.637212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.637497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.637506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.637795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.637804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.638109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.638119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.638449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.638460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.638796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.638806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.639140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.639150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.639465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.639475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.639801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.639811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.639992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.640002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.640342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.640352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.640653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.640663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.640949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.640961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.641274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.641284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.641456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.641465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.641693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.641703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.642062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.642073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.642404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.642414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.642573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.642582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.642779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.642789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.643144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.643154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.643322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.643332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.643705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.643715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.644023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.644033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.644364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.644374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.644669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.127 [2024-11-20 08:31:21.644678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.127 qpair failed and we were unable to recover it.
00:34:17.127 [2024-11-20 08:31:21.644871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.644882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.645118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.645472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.645483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.645594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.645604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.645934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.645945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.646288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.646298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.646578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.646588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.646767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.646778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.647077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.647088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.647481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.647492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.647779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.647789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.648097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.648107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.648266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.648277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.648465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.648476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.648867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.648878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.649297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.649307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.649583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.649593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.649774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.649784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 Malloc0
00:34:17.128 [2024-11-20 08:31:21.650034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.650045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.650211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.650220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.650462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.650471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.128 [2024-11-20 08:31:21.650788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.650798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.651122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.651132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:17.128 [2024-11-20 08:31:21.651300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.651310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.128 [2024-11-20 08:31:21.651527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.651537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 [2024-11-20 08:31:21.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.651710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.128 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.128 [2024-11-20 08:31:21.652029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.128 [2024-11-20 08:31:21.652039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.128 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.652282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.652292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.652617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.652628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.652944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.652954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.653254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.653264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.653436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.653446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.653634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.653644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.653836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.653847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.654190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.654200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.654504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.654515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.654739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.654750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.655074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.655084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.655235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.655245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.655571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.655581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.655899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.655909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.656204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.656214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.656518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.656528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.656743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.656752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.657113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.657123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.657417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:17.129 [2024-11-20 08:31:21.657456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.657465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.657647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.657656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.657837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.657847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.658147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.658158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.658492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.658502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.658830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.658840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.659169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.659180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.659492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.659504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.659812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.659822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.660119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.660129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.660298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.660308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.660583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.660593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.660728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.129 [2024-11-20 08:31:21.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.129 qpair failed and we were unable to recover it.
00:34:17.129 [2024-11-20 08:31:21.660987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.660998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.661858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.661881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.662176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.662186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.662511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.662520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.662851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.662866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.663010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.663020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.663365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.663374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.663425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.663434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.663771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.663966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.663976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.664164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.664173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.664485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.664495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.664778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.664788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.664967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.664978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.665155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.665165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.665400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.665410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.665730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.665739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.666051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.666061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 [2024-11-20 08:31:21.666277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.666287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.130 qpair failed and we were unable to recover it.
00:34:17.130 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.130 [2024-11-20 08:31:21.666576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.130 [2024-11-20 08:31:21.666586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:17.131 [2024-11-20 08:31:21.666982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.666993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.131 [2024-11-20 08:31:21.667286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.667296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.131 [2024-11-20 08:31:21.667586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.667597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.667767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.667777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.667832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.667842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.668043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.668053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.668423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.668433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.668724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.668735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.669044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.669057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.669359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.669369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.669560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.669905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.669915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.670294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.670303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.670509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.670520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.670876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.670886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.671242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.671251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.671581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.671592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.671879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.671890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.672175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.672185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.672492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.672502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.672817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.672828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.673007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.673017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.673311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.673322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.673638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.673649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.673826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.673836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.674083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.674094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.674418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.674428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.674620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.674630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.674962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.674972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.675373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.675383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.675537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.675546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.675929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.675940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.131 qpair failed and we were unable to recover it.
00:34:17.131 [2024-11-20 08:31:21.676270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.131 [2024-11-20 08:31:21.676279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.676616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-11-20 08:31:21.676626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.676936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-11-20 08:31:21.676946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.677257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-11-20 08:31:21.677270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.677486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-11-20 08:31:21.677495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.677801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-11-20 08:31:21.677812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
00:34:17.132 [2024-11-20 08:31:21.678121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.678132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.678208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.678217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.678382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.678392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.132 [2024-11-20 08:31:21.678719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.678729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-11-20 08:31:21.678897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.132 [2024-11-20 08:31:21.678908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.679189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.679199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.132 [2024-11-20 08:31:21.679381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.679399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.132 [2024-11-20 08:31:21.679710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.679720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-11-20 08:31:21.680119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.680130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.680346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.680358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.680678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.681006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.681017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.681343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.681353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-11-20 08:31:21.681628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.681639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.681833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.681844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.682103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.682114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.682463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.682473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.682672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.682682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-11-20 08:31:21.682846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.682857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.683142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.683152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.683396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-11-20 08:31:21.683406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-11-20 08:31:21.683715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.683726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.683901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.683912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.684220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.684229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.684559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.684570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.684744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.684755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.685067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.685077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.685239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.685249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.685572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.685582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.685762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.685772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.686101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.686433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.686444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.686764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.686775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.687079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.687089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.687383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.687392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.687718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.687728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.687981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.687993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.688292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.688303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.688489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.688500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.688831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.689139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.689149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.689468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.689479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.689767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.689777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.690084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.690096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.690279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.690289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.133 [2024-11-20 08:31:21.690599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.690609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.133 [2024-11-20 08:31:21.690915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.690925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.133 [2024-11-20 08:31:21.691234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.691244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.691405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.691419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.133 [2024-11-20 08:31:21.691691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.691701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.692014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.692024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 00:34:17.133 [2024-11-20 08:31:21.692266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-11-20 08:31:21.692277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.133 qpair failed and we were unable to recover it. 
00:34:17.133 [2024-11-20 08:31:21.692567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.692577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.692756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.692767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.693056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.693066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.693363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.693373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.693537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.693548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.693875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.693885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.694244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.694254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.694551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.694560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.694871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.694881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.695262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.695272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.695460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.695470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.695798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.695809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.696202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.696212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.696565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.696574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.696869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.696879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.697181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.697191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.697395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.134 [2024-11-20 08:31:21.697405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f490 with addr=10.0.0.2, port=4420 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 [2024-11-20 08:31:21.697679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.134 [2024-11-20 08:31:21.708404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.134 [2024-11-20 08:31:21.708489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.134 [2024-11-20 08:31:21.708507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.134 [2024-11-20 08:31:21.708515] 
nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.134 [2024-11-20 08:31:21.708522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.134 [2024-11-20 08:31:21.708542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.134 qpair failed and we were unable to recover it. 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.134 08:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2187264 00:34:17.134 [2024-11-20 08:31:21.718211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.134 [2024-11-20 08:31:21.718274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.134 [2024-11-20 08:31:21.718289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.134 [2024-11-20 08:31:21.718296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.134 [2024-11-20 08:31:21.718303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.134 [2024-11-20 08:31:21.718317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.728285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.134 [2024-11-20 08:31:21.728343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.134 [2024-11-20 08:31:21.728357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.134 [2024-11-20 08:31:21.728364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.134 [2024-11-20 08:31:21.728370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.134 [2024-11-20 08:31:21.728384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.738336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.134 [2024-11-20 08:31:21.738397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.134 [2024-11-20 08:31:21.738410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.134 [2024-11-20 08:31:21.738417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.134 [2024-11-20 08:31:21.738423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.134 [2024-11-20 08:31:21.738436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.134 qpair failed and we were unable to recover it. 
00:34:17.134 [2024-11-20 08:31:21.748308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.134 [2024-11-20 08:31:21.748366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.748379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.748386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.748392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.748406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.758287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.758339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.758356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.758363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.758369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.758383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.768318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.768378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.768391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.768398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.768405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.768418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.778316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.778374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.778388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.778396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.778402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.778416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.788414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.788497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.788511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.788518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.788524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.788537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.798426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.798477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.798491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.798497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.798510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.798523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.808376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.808425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.808440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.808447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.808454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.808468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.818338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.818396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.818410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.818417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.818423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.135 [2024-11-20 08:31:21.818436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-11-20 08:31:21.828539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.135 [2024-11-20 08:31:21.828600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.135 [2024-11-20 08:31:21.828613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.135 [2024-11-20 08:31:21.828620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.135 [2024-11-20 08:31:21.828626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.136 [2024-11-20 08:31:21.828639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.397 [2024-11-20 08:31:21.838392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.397 [2024-11-20 08:31:21.838458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.397 [2024-11-20 08:31:21.838473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.838481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.838488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.838504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.848538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.848597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.848611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.848618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.848624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.848638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.858527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.858592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.858618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.858626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.858633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.858652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.868504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.868558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.868574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.868581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.868587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.868602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.878529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.878610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.878623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.878630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.878637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.878650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.888654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.888712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.888742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.888751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.888758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.888777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.898746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.898803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.898819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.898826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.898833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.898847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.908732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.908793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.908807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.908814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.908820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.908834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.918621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.918676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.918692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.918699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.918705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.918720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.928959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.929055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.929069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.929076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.929086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.929100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.398 qpair failed and we were unable to recover it. 
00:34:17.398 [2024-11-20 08:31:21.938842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.398 [2024-11-20 08:31:21.938911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.398 [2024-11-20 08:31:21.938925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.398 [2024-11-20 08:31:21.938932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.398 [2024-11-20 08:31:21.938938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.398 [2024-11-20 08:31:21.938952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.948886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.948942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.948956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.948963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.948969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.948983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.958899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.958954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.958967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.958974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.958980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.958994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.968746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.968843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.968855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.968867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.968874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.968888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.978910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.978966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.978981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.978988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.978994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.979008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.988956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.989012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.989026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.989034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.989040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.989054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:21.998972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:21.999029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:21.999042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:21.999050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:21.999056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:21.999070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.008961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.009015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.009029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.009036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:22.009042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:22.009056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.019031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.019085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.019102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.019109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:22.019116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:22.019129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.029065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.029164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.029177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.029184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:22.029190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:22.029204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.039098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.039145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.039158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.039165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:22.039171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:22.039184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.049103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.049161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.049174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.049181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.399 [2024-11-20 08:31:22.049188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.399 [2024-11-20 08:31:22.049201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.399 qpair failed and we were unable to recover it. 
00:34:17.399 [2024-11-20 08:31:22.059029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.399 [2024-11-20 08:31:22.059089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.399 [2024-11-20 08:31:22.059102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.399 [2024-11-20 08:31:22.059108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.059118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.059132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.069069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.069125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.069139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.069146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.069152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.069166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.079200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.079250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.079263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.079270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.079276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.079290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.089275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.089341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.089353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.089360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.089367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.089380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.099237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.099295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.099309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.099316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.099322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.099335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.109185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.109257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.109270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.109276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.109283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.109296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.400 [2024-11-20 08:31:22.119325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.400 [2024-11-20 08:31:22.119408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.400 [2024-11-20 08:31:22.119421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.400 [2024-11-20 08:31:22.119428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.400 [2024-11-20 08:31:22.119434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.400 [2024-11-20 08:31:22.119447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.400 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.129255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.129308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.129321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.129328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.129334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.662 [2024-11-20 08:31:22.129348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.662 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.139394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.139447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.139459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.139466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.139473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.662 [2024-11-20 08:31:22.139486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.662 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.149415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.149471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.149487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.149494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.149500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.662 [2024-11-20 08:31:22.149514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.662 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.159432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.159491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.159504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.159511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.159518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.662 [2024-11-20 08:31:22.159531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.662 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.169411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.169463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.169476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.169483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.169490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.662 [2024-11-20 08:31:22.169503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.662 qpair failed and we were unable to recover it. 
00:34:17.662 [2024-11-20 08:31:22.179473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.662 [2024-11-20 08:31:22.179534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.662 [2024-11-20 08:31:22.179548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.662 [2024-11-20 08:31:22.179555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.662 [2024-11-20 08:31:22.179561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.179574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.189425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.189480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.189495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.189502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.189512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.189526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.199423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.199477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.199490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.199497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.199503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.199517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.209570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.209622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.209634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.209641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.209648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.209661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.219602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.219662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.219688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.219696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.219703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.219723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.229629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.229683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.229698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.229706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.229712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.229726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.239657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.239711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.239724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.239732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.239738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.239752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.249558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.249608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.249621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.249628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.249635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.249648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.259732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.259787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.259800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.259806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.259813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.259827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.269706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.269764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.269779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.269786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.269792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.269810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.279765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.279815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.279833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.279840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.279846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.279860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.289792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.663 [2024-11-20 08:31:22.289844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.663 [2024-11-20 08:31:22.289857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.663 [2024-11-20 08:31:22.289869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.663 [2024-11-20 08:31:22.289876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.663 [2024-11-20 08:31:22.289890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.663 qpair failed and we were unable to recover it. 
00:34:17.663 [2024-11-20 08:31:22.299704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.299756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.299769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.299776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.299783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.299796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.309868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.309934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.309948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.309955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.309962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.309975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.319844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.319899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.319912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.319919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.319928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.319942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.329901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.329950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.329963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.329970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.329976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.329990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.339933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.340016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.340029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.340037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.340043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.340056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.349974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.350028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.350041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.350048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.350054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.350068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.359892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.359945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.359959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.359966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.359972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.359986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.369891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.369952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.369965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.369972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.369979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.369992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:17.664 [2024-11-20 08:31:22.380038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.664 [2024-11-20 08:31:22.380100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.664 [2024-11-20 08:31:22.380113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.664 [2024-11-20 08:31:22.380120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.664 [2024-11-20 08:31:22.380127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:17.664 [2024-11-20 08:31:22.380142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:17.664 qpair failed and we were unable to recover it. 
00:34:18.190 [2024-11-20 08:31:22.720857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.190 [2024-11-20 08:31:22.720911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.190 [2024-11-20 08:31:22.720925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.190 [2024-11-20 08:31:22.720932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.190 [2024-11-20 08:31:22.720938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.190 [2024-11-20 08:31:22.720952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.190 qpair failed and we were unable to recover it. 
00:34:18.190 [2024-11-20 08:31:22.731029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.190 [2024-11-20 08:31:22.731092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.190 [2024-11-20 08:31:22.731105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.190 [2024-11-20 08:31:22.731112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.190 [2024-11-20 08:31:22.731118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.190 [2024-11-20 08:31:22.731132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.190 qpair failed and we were unable to recover it. 
00:34:18.190 [2024-11-20 08:31:22.741039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.190 [2024-11-20 08:31:22.741094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.190 [2024-11-20 08:31:22.741107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.190 [2024-11-20 08:31:22.741114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.190 [2024-11-20 08:31:22.741120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.190 [2024-11-20 08:31:22.741134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.190 qpair failed and we were unable to recover it. 
00:34:18.190 [2024-11-20 08:31:22.751102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.190 [2024-11-20 08:31:22.751159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.190 [2024-11-20 08:31:22.751173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.190 [2024-11-20 08:31:22.751180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.751186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.751200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.761120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.761170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.761184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.761191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.761197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.761211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.771125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.771179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.771192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.771199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.771205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.771219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.781143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.781200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.781213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.781220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.781226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.781239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.791162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.791218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.791231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.791238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.791244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.791257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.801223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.801281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.801298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.801305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.801311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.801324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.811129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.811181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.811198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.811205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.811211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.811226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.821285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.821342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.821355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.821362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.821369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.821382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.831332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.831387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.831400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.831407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.831414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.831427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.841339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.841395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.841409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.841416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.841422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.841439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.851359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.851418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.851432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.851439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.851445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.851459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.861387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.861447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.861462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.861469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.861476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.861493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.871430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.871509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.871523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.871530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.191 [2024-11-20 08:31:22.871536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.191 [2024-11-20 08:31:22.871550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.191 qpair failed and we were unable to recover it. 
00:34:18.191 [2024-11-20 08:31:22.881454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.191 [2024-11-20 08:31:22.881504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.191 [2024-11-20 08:31:22.881518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.191 [2024-11-20 08:31:22.881524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.192 [2024-11-20 08:31:22.881531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.192 [2024-11-20 08:31:22.881544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.192 qpair failed and we were unable to recover it. 
00:34:18.192 [2024-11-20 08:31:22.891348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.192 [2024-11-20 08:31:22.891403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.192 [2024-11-20 08:31:22.891416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.192 [2024-11-20 08:31:22.891423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.192 [2024-11-20 08:31:22.891429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.192 [2024-11-20 08:31:22.891443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.192 qpair failed and we were unable to recover it. 
00:34:18.192 [2024-11-20 08:31:22.901493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.192 [2024-11-20 08:31:22.901547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.192 [2024-11-20 08:31:22.901560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.192 [2024-11-20 08:31:22.901567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.192 [2024-11-20 08:31:22.901574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.192 [2024-11-20 08:31:22.901588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.192 qpair failed and we were unable to recover it. 
00:34:18.192 [2024-11-20 08:31:22.911532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.192 [2024-11-20 08:31:22.911586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.192 [2024-11-20 08:31:22.911600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.192 [2024-11-20 08:31:22.911606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.192 [2024-11-20 08:31:22.911613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.192 [2024-11-20 08:31:22.911626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.192 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.921526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.921583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.921596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.921603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.921610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.921623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.931587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.931636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.931652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.931659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.931666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.931679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.941600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.941657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.941670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.941677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.941684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.941697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.951653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.951713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.951738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.951747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.951754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.951773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.961668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.961722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.961747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.961756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.961763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.961781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.971690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.971744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.971759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.971767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.971773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.971793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.981590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.981644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.981658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.981665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.981671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.981685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:22.991623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:22.991686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:22.991699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:22.991706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:22.991712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.455 [2024-11-20 08:31:22.991726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.455 qpair failed and we were unable to recover it. 
00:34:18.455 [2024-11-20 08:31:23.001770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.455 [2024-11-20 08:31:23.001824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.455 [2024-11-20 08:31:23.001838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.455 [2024-11-20 08:31:23.001844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.455 [2024-11-20 08:31:23.001851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.001867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.011817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.011884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.011897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.011904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.011910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.011923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.021708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.021762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.021777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.021784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.021790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.021804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.031857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.031923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.031937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.031944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.031950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.031964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.041869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.041928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.041942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.041948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.041955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.041968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.052023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.052073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.052086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.052093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.052099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.052112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.061948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.062004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.062021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.062028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.062034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.062048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.071986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.072045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.072059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.072065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.072072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.072086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.081968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.082058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.082071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.082078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.082085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.082098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.092021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.092085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.092099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.092106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.092112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.092126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.102048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.102154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.102167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.102174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.102181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.102202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.112095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.112197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.112211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.112218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.112224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.112238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.122110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.122160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.122173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.122180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.122186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.456 [2024-11-20 08:31:23.122199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.456 qpair failed and we were unable to recover it. 
00:34:18.456 [2024-11-20 08:31:23.132147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.456 [2024-11-20 08:31:23.132199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.456 [2024-11-20 08:31:23.132212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.456 [2024-11-20 08:31:23.132219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.456 [2024-11-20 08:31:23.132226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.457 [2024-11-20 08:31:23.132240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.457 qpair failed and we were unable to recover it. 
00:34:18.457 [2024-11-20 08:31:23.142174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.457 [2024-11-20 08:31:23.142234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.457 [2024-11-20 08:31:23.142249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.457 [2024-11-20 08:31:23.142256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.457 [2024-11-20 08:31:23.142262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.457 [2024-11-20 08:31:23.142280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.457 qpair failed and we were unable to recover it. 
00:34:18.457 [2024-11-20 08:31:23.152214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.457 [2024-11-20 08:31:23.152265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.457 [2024-11-20 08:31:23.152279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.457 [2024-11-20 08:31:23.152286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.457 [2024-11-20 08:31:23.152293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.457 [2024-11-20 08:31:23.152306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.457 qpair failed and we were unable to recover it. 
00:34:18.457 [2024-11-20 08:31:23.162191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.457 [2024-11-20 08:31:23.162251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.457 [2024-11-20 08:31:23.162264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.457 [2024-11-20 08:31:23.162271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.457 [2024-11-20 08:31:23.162277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.457 [2024-11-20 08:31:23.162291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.457 qpair failed and we were unable to recover it. 
00:34:18.457 [2024-11-20 08:31:23.172124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.457 [2024-11-20 08:31:23.172222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.457 [2024-11-20 08:31:23.172235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.457 [2024-11-20 08:31:23.172242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.457 [2024-11-20 08:31:23.172249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.457 [2024-11-20 08:31:23.172262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.457 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.182290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.182391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.182404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.182411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.182417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.182431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.192308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.192399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.192416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.192423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.192429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.192442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.202370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.202445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.202458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.202465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.202471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.202485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.212353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.212404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.212417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.212424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.212430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.212443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.222398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.222452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.222464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.222471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.222478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.222491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.232443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.232502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.232516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.232523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.232530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.232547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.242452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.242505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.242519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.242526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.242532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.242546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.719 [2024-11-20 08:31:23.252462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.719 [2024-11-20 08:31:23.252518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.719 [2024-11-20 08:31:23.252531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.719 [2024-11-20 08:31:23.252538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.719 [2024-11-20 08:31:23.252544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.719 [2024-11-20 08:31:23.252558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.719 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.262508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.262564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.262581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.262589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.262595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.262610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.272533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.272587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.272600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.272607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.272613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.272627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.282545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.282601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.282615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.282621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.282628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.282642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.292581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.292632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.292645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.292652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.292658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.292672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.302600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.302653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.302667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.302674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.302680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.302694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.720 [2024-11-20 08:31:23.312690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.720 [2024-11-20 08:31:23.312751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.720 [2024-11-20 08:31:23.312764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.720 [2024-11-20 08:31:23.312770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.720 [2024-11-20 08:31:23.312777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.720 [2024-11-20 08:31:23.312791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.720 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-11-20 08:31:23.663607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.984 [2024-11-20 08:31:23.663674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.984 [2024-11-20 08:31:23.663688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.984 [2024-11-20 08:31:23.663695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.984 [2024-11-20 08:31:23.663702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.984 [2024-11-20 08:31:23.663716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-11-20 08:31:23.673653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.984 [2024-11-20 08:31:23.673709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.984 [2024-11-20 08:31:23.673722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.984 [2024-11-20 08:31:23.673729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.984 [2024-11-20 08:31:23.673736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.984 [2024-11-20 08:31:23.673749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-11-20 08:31:23.683667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.984 [2024-11-20 08:31:23.683718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.984 [2024-11-20 08:31:23.683732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.984 [2024-11-20 08:31:23.683739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.984 [2024-11-20 08:31:23.683745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.984 [2024-11-20 08:31:23.683759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-11-20 08:31:23.693691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.984 [2024-11-20 08:31:23.693784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.984 [2024-11-20 08:31:23.693798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.984 [2024-11-20 08:31:23.693805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.984 [2024-11-20 08:31:23.693811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.984 [2024-11-20 08:31:23.693825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:18.984 [2024-11-20 08:31:23.703724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.984 [2024-11-20 08:31:23.703776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.984 [2024-11-20 08:31:23.703789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.984 [2024-11-20 08:31:23.703796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.984 [2024-11-20 08:31:23.703802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:18.984 [2024-11-20 08:31:23.703816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.984 qpair failed and we were unable to recover it. 
00:34:19.246 [2024-11-20 08:31:23.713763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.246 [2024-11-20 08:31:23.713819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.246 [2024-11-20 08:31:23.713836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.246 [2024-11-20 08:31:23.713843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.246 [2024-11-20 08:31:23.713850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.246 [2024-11-20 08:31:23.713867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-11-20 08:31:23.723823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.246 [2024-11-20 08:31:23.723894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.246 [2024-11-20 08:31:23.723907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.246 [2024-11-20 08:31:23.723914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.246 [2024-11-20 08:31:23.723920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.246 [2024-11-20 08:31:23.723934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-11-20 08:31:23.733773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.246 [2024-11-20 08:31:23.733860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.246 [2024-11-20 08:31:23.733876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.246 [2024-11-20 08:31:23.733883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.246 [2024-11-20 08:31:23.733889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.246 [2024-11-20 08:31:23.733903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-11-20 08:31:23.743827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.743912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.743925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.743932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.743938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.743952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.753885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.753985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.753998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.754005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.754012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.754034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.763914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.763968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.763981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.763987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.763993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.764007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.773814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.773870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.773884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.773890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.773897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.773910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.783857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.783922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.783935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.783942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.783948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.783961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.793988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.794049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.794063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.794069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.794076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.794089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.803994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.804047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.804064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.804071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.804077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.804093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.814023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.814075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.814089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.814096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.814102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.814116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.823990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.824046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.824059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.824066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.824072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.824086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.834109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.834160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.834172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.834179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.834186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.834199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.844113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.844164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.844181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.844188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.844194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.844207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.854172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.854233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.854248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.854255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.854261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.854275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.864223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.247 [2024-11-20 08:31:23.864294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.247 [2024-11-20 08:31:23.864307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.247 [2024-11-20 08:31:23.864314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.247 [2024-11-20 08:31:23.864320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.247 [2024-11-20 08:31:23.864333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-11-20 08:31:23.874223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.874282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.874295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.874302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.874308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.874322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.884247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.884304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.884317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.884324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.884330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.884347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.894258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.894320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.894336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.894343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.894351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.894367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.904286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.904341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.904355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.904362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.904368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.904382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.914333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.914395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.914408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.914415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.914421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.914434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.924407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.924458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.924471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.924478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.924484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.924498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.934483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.934550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.934563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.934570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.934576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.934589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.944464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.944516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.944529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.944536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.944543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.944556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.954491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.954550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.954563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.954569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.954576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.954589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-11-20 08:31:23.964515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.248 [2024-11-20 08:31:23.964567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.248 [2024-11-20 08:31:23.964580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.248 [2024-11-20 08:31:23.964587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.248 [2024-11-20 08:31:23.964593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.248 [2024-11-20 08:31:23.964606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.511 [2024-11-20 08:31:23.974389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.511 [2024-11-20 08:31:23.974491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.511 [2024-11-20 08:31:23.974508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:23.974515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:23.974521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:23.974535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:23.984506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:23.984584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:23.984597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:23.984604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:23.984610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:23.984623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:23.994562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:23.994626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:23.994652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:23.994660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:23.994667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:23.994685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.004573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.004634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.004659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.004668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.004675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.004694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.014593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.014646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.014662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.014669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.014675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.014695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.024638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.024696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.024709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.024716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.024722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.024736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.034551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.034613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.034628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.034635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.034641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.034656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.044661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.044717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.044731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.044738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.044745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.044759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.054707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.054758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.054771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.054778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.054785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.054799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.064724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.064783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.064798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.064805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.064811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.064825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.074773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.074827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.074841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.074848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.074855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.074872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.084806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.084872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.084886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.084892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.084899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.084913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.094825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.094923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.512 [2024-11-20 08:31:24.094937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.512 [2024-11-20 08:31:24.094944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.512 [2024-11-20 08:31:24.094951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.512 [2024-11-20 08:31:24.094965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.512 qpair failed and we were unable to recover it. 
00:34:19.512 [2024-11-20 08:31:24.104878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.512 [2024-11-20 08:31:24.104934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.104951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.104958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.104964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.104978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.114826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.114882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.114895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.114902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.114909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.114922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.124929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.124981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.124994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.125001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.125008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.125021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.134931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.135008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.135021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.135028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.135035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.135048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.144964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.145021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.145034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.145040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.145047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.145064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.154892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.154952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.154965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.154972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.154979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.154992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.165021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.165104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.165118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.165124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.165131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.165145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.175058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.175111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.175125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.175132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.175138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.175152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.184963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.185020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.185034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.185042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.185048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.185062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.195099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.195181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.195195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.195202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.195208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.195222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.205122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.205173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.205186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.205193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.205199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.205212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.215178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.215233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.215246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.215253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.215260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.215273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.225073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.225133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.513 [2024-11-20 08:31:24.225146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.513 [2024-11-20 08:31:24.225153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.513 [2024-11-20 08:31:24.225159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.513 [2024-11-20 08:31:24.225173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.513 qpair failed and we were unable to recover it. 
00:34:19.513 [2024-11-20 08:31:24.235232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.513 [2024-11-20 08:31:24.235286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.514 [2024-11-20 08:31:24.235300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.514 [2024-11-20 08:31:24.235310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.514 [2024-11-20 08:31:24.235317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:19.514 [2024-11-20 08:31:24.235330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.514 qpair failed and we were unable to recover it. 
00:34:19.775 [2024-11-20 08:31:24.245244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.775 [2024-11-20 08:31:24.245349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.775 [2024-11-20 08:31:24.245362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.775 [2024-11-20 08:31:24.245369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.775 [2024-11-20 08:31:24.245376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.775 [2024-11-20 08:31:24.245389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.775 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.255250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.255300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.255313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.255320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.255326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.255339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.265181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.265239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.265252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.265259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.265265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.265278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.275318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.275377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.275391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.275398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.275404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.275422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.285362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.285423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.285436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.285442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.285449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.285462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.295376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.295430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.295443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.295450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.295456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.295470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.305288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.305343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.305356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.305362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.305369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.305382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.315435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.315486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.315499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.315506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.315512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.315525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.325458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.325514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.325528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.325535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.325541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.325554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.335483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.335569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.335582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.335589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.335595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.335609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.345520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.345576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.345589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.345596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.345602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.345615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.355566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.355645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.355658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.355665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.355672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.355685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.365550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.365618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.365631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.365641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.365647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.776 [2024-11-20 08:31:24.365661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.776 qpair failed and we were unable to recover it.
00:34:19.776 [2024-11-20 08:31:24.375453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.776 [2024-11-20 08:31:24.375508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.776 [2024-11-20 08:31:24.375521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.776 [2024-11-20 08:31:24.375528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.776 [2024-11-20 08:31:24.375534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.375547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.385653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.385747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.385760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.385767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.385774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.385787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.395540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.395596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.395611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.395618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.395624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.395638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.405672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.405724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.405738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.405745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.405751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.405772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.415672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.415716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.415729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.415736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.415742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.415755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.425739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.425822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.425834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.425842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.425848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.425866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.435767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.435820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.435832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.435839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.435845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.435859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.445796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.445854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.445872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.445879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.445887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.445901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.455797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.455848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.455865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.455872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.455879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.455893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.465846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.465933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.465946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.465953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.465959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.465972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.475902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.475956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.475970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.475977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.475983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.475997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.485915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.485968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.485981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.485988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.485994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.486008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:19.777 [2024-11-20 08:31:24.495902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:19.777 [2024-11-20 08:31:24.495949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:19.777 [2024-11-20 08:31:24.495962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:19.777 [2024-11-20 08:31:24.495972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:19.777 [2024-11-20 08:31:24.495978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:19.777 [2024-11-20 08:31:24.495992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:19.777 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.505955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.506012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.506025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.506032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.506039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.506052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.516008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.516066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.516079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.516085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.516092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.516105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.526016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.526067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.526080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.526087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.526093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.526107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.536039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.536085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.536099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.536105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.536112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.536128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.546128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.546180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.546193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.546200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.546207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.546220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.556049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.556106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.556119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.556126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.556133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.556147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.566121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.566172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.566185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.566192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.566198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.566212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.575999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.576049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.576063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.576070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.576077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.576091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.586198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:20.040 [2024-11-20 08:31:24.586259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:20.040 [2024-11-20 08:31:24.586274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:20.040 [2024-11-20 08:31:24.586281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:20.040 [2024-11-20 08:31:24.586288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:20.040 [2024-11-20 08:31:24.586305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:20.040 qpair failed and we were unable to recover it.
00:34:20.040 [2024-11-20 08:31:24.596207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.040 [2024-11-20 08:31:24.596282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.040 [2024-11-20 08:31:24.596295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.040 [2024-11-20 08:31:24.596302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.040 [2024-11-20 08:31:24.596309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.040 [2024-11-20 08:31:24.596322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.040 qpair failed and we were unable to recover it. 
00:34:20.040 [2024-11-20 08:31:24.606256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.040 [2024-11-20 08:31:24.606308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.040 [2024-11-20 08:31:24.606321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.040 [2024-11-20 08:31:24.606328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.606334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.606347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.616228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.616271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.616284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.616291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.616297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.616311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.626304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.626357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.626371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.626381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.626388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.626401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.636336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.636406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.636419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.636426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.636432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.636446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.646325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.646380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.646393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.646400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.646406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.646419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.656333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.656421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.656434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.656441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.656447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.656460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.666386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.666474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.666487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.666495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.666501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.666519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.676441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.676493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.676505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.676512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.676519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.676531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.686488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.686579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.686594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.686601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.686607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.686621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.696453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.696502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.696515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.696521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.696528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.041 [2024-11-20 08:31:24.696542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.041 qpair failed and we were unable to recover it. 
00:34:20.041 [2024-11-20 08:31:24.706509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.041 [2024-11-20 08:31:24.706574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.041 [2024-11-20 08:31:24.706588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.041 [2024-11-20 08:31:24.706595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.041 [2024-11-20 08:31:24.706601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.706614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-11-20 08:31:24.716419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.042 [2024-11-20 08:31:24.716479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.042 [2024-11-20 08:31:24.716504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.042 [2024-11-20 08:31:24.716513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.042 [2024-11-20 08:31:24.716520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.716540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-11-20 08:31:24.726539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.042 [2024-11-20 08:31:24.726590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.042 [2024-11-20 08:31:24.726605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.042 [2024-11-20 08:31:24.726612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.042 [2024-11-20 08:31:24.726619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.726633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-11-20 08:31:24.736561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.042 [2024-11-20 08:31:24.736612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.042 [2024-11-20 08:31:24.736636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.042 [2024-11-20 08:31:24.736645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.042 [2024-11-20 08:31:24.736652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.736670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-11-20 08:31:24.746611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.042 [2024-11-20 08:31:24.746675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.042 [2024-11-20 08:31:24.746700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.042 [2024-11-20 08:31:24.746708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.042 [2024-11-20 08:31:24.746715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.746734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.042 [2024-11-20 08:31:24.756500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.042 [2024-11-20 08:31:24.756550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.042 [2024-11-20 08:31:24.756565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.042 [2024-11-20 08:31:24.756577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.042 [2024-11-20 08:31:24.756583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.042 [2024-11-20 08:31:24.756598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.042 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-11-20 08:31:24.766679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.304 [2024-11-20 08:31:24.766736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.304 [2024-11-20 08:31:24.766750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.304 [2024-11-20 08:31:24.766758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.304 [2024-11-20 08:31:24.766764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.304 [2024-11-20 08:31:24.766778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-11-20 08:31:24.776665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.304 [2024-11-20 08:31:24.776716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.304 [2024-11-20 08:31:24.776730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.304 [2024-11-20 08:31:24.776737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.304 [2024-11-20 08:31:24.776743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.304 [2024-11-20 08:31:24.776757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-11-20 08:31:24.786731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.304 [2024-11-20 08:31:24.786825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.304 [2024-11-20 08:31:24.786839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.304 [2024-11-20 08:31:24.786845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.304 [2024-11-20 08:31:24.786852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.304 [2024-11-20 08:31:24.786870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-11-20 08:31:24.796749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.304 [2024-11-20 08:31:24.796800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.304 [2024-11-20 08:31:24.796813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.304 [2024-11-20 08:31:24.796820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.304 [2024-11-20 08:31:24.796826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.304 [2024-11-20 08:31:24.796844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-11-20 08:31:24.806794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.806847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.806867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.806874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.806881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.806895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.816771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.816822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.816835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.816842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.816848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.816866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.826836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.826916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.826929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.826936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.826942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.826956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.836709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.836773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.836787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.836793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.836799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.836813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.846888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.846943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.846956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.846963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.846969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.846983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.856895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.856944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.856957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.856964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.856970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.856983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.866956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.867028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.867041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.867048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.867054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.867067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.876926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.876980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.876993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.877000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.877006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.877020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.887028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.887078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.887092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.887102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.887109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.887123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.896997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.897049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.897064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.897071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.897077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.305 [2024-11-20 08:31:24.897091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-11-20 08:31:24.907069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.305 [2024-11-20 08:31:24.907147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.305 [2024-11-20 08:31:24.907160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.305 [2024-11-20 08:31:24.907167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.305 [2024-11-20 08:31:24.907173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.907187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.917049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.917100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.917112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.917119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.917125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.917139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.927116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.927173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.927186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.927193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.927199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.927216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.937106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.937169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.937182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.937189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.937195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.937209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.947070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.947126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.947140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.947147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.947153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.947171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.957151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.957203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.957217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.957224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.957231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.957244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.967101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.967155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.967169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.967175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.967182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.967196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.977194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.977247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.977261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.977267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.977274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.977287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.987295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.987351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.987365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.987372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.987378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.987391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:24.997175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:24.997232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:24.997245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:24.997251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:24.997258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:24.997271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:25.007347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:25.007403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:25.007416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:25.007423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:25.007430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:25.007443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:25.017337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.306 [2024-11-20 08:31:25.017405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.306 [2024-11-20 08:31:25.017418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.306 [2024-11-20 08:31:25.017429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.306 [2024-11-20 08:31:25.017435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.306 [2024-11-20 08:31:25.017448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.306 qpair failed and we were unable to recover it. 
00:34:20.306 [2024-11-20 08:31:25.027313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.307 [2024-11-20 08:31:25.027402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.307 [2024-11-20 08:31:25.027415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.307 [2024-11-20 08:31:25.027422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.307 [2024-11-20 08:31:25.027428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.307 [2024-11-20 08:31:25.027441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.307 qpair failed and we were unable to recover it. 
00:34:20.569 [2024-11-20 08:31:25.037394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.569 [2024-11-20 08:31:25.037446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.569 [2024-11-20 08:31:25.037459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.569 [2024-11-20 08:31:25.037466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.569 [2024-11-20 08:31:25.037472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.037486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.047320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.047390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.047403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.047410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.047417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.047431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.057408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.057457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.057470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.057477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.057483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.057501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.067544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.067600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.067614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.067621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.067627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.067641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.077459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.077512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.077525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.077532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.077538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.077552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.087536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.087589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.087602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.087609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.087615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.087628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.097436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.097491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.097505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.097512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.097518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.097531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.107655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.107717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.107732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.107739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.107745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.107759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.117631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.117682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.117695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.117702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.117708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.117721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.127685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.127738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.127752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.127759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.127765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.127778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.137687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.137739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.137753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.137760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.137766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.570 [2024-11-20 08:31:25.137779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.570 qpair failed and we were unable to recover it. 
00:34:20.570 [2024-11-20 08:31:25.147739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.570 [2024-11-20 08:31:25.147833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.570 [2024-11-20 08:31:25.147846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.570 [2024-11-20 08:31:25.147856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.570 [2024-11-20 08:31:25.147867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.147881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.157739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.157790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.157803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.157810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.157816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.157830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.167775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.167832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.167845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.167853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.167859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.167878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.177650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.177730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.177745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.177752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.177759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.177774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.187836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.187901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.187915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.187922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.187928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.187945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.197838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.197892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.197906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.197913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.197919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.197933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.207824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.207896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.207909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.207916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.207922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.207936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.217889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.217941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.217954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.217961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.217967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.217980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.227811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.227873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.227886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.227893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.227899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.227913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.237954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.238012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.571 [2024-11-20 08:31:25.238025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.571 [2024-11-20 08:31:25.238032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.571 [2024-11-20 08:31:25.238039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.571 [2024-11-20 08:31:25.238053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.571 qpair failed and we were unable to recover it. 
00:34:20.571 [2024-11-20 08:31:25.248012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.571 [2024-11-20 08:31:25.248079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.572 [2024-11-20 08:31:25.248092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.572 [2024-11-20 08:31:25.248099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.572 [2024-11-20 08:31:25.248105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.572 [2024-11-20 08:31:25.248119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.572 qpair failed and we were unable to recover it. 
00:34:20.572 [2024-11-20 08:31:25.257956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.572 [2024-11-20 08:31:25.258011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.572 [2024-11-20 08:31:25.258024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.572 [2024-11-20 08:31:25.258031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.572 [2024-11-20 08:31:25.258038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.572 [2024-11-20 08:31:25.258052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.572 qpair failed and we were unable to recover it. 
00:34:20.572 [2024-11-20 08:31:25.268058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.572 [2024-11-20 08:31:25.268110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.572 [2024-11-20 08:31:25.268123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.572 [2024-11-20 08:31:25.268130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.572 [2024-11-20 08:31:25.268137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.572 [2024-11-20 08:31:25.268150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.572 qpair failed and we were unable to recover it. 
00:34:20.572 [2024-11-20 08:31:25.278057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.572 [2024-11-20 08:31:25.278106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.572 [2024-11-20 08:31:25.278119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.572 [2024-11-20 08:31:25.278130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.572 [2024-11-20 08:31:25.278136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.572 [2024-11-20 08:31:25.278150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.572 qpair failed and we were unable to recover it. 
00:34:20.572 [2024-11-20 08:31:25.287969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.572 [2024-11-20 08:31:25.288022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.572 [2024-11-20 08:31:25.288035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.572 [2024-11-20 08:31:25.288042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.572 [2024-11-20 08:31:25.288048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.572 [2024-11-20 08:31:25.288062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.572 qpair failed and we were unable to recover it. 
00:34:20.848 [2024-11-20 08:31:25.298093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.848 [2024-11-20 08:31:25.298141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.848 [2024-11-20 08:31:25.298154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.848 [2024-11-20 08:31:25.298161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.848 [2024-11-20 08:31:25.298167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.848 [2024-11-20 08:31:25.298181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.848 qpair failed and we were unable to recover it. 
00:34:20.848 [2024-11-20 08:31:25.308189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.848 [2024-11-20 08:31:25.308245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.848 [2024-11-20 08:31:25.308259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.308266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.308273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.308286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.318165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.318217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.318230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.318238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.318244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.318261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.328205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.328252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.328265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.328272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.328278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.328292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.338068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.338116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.338130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.338137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.338144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.338162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.348258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.348309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.348323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.348330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.348336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.348350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.358252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.358299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.358312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.358319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.358325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.358339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.368301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.368365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.368379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.368386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.368393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.368410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.378302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.378352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.378366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.378373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.378379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.378393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.388230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.388282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.388295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.388302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.388308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.388321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.398369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.398419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.398432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.398439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.398445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.398459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.408413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.408465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.408478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.408493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.408499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.408513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.418394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.418444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.418456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.418463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.418470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.418483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.428476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.428534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.428547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.428554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.428560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.428573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.438447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.438495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.438509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.438516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.438522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.438535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.448519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.448573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.448585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.448593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.448599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.448616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.458393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.458445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.458458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.458465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.458471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.458485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.468594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.468648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.468661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.468668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.468675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.468688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.478582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.478643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.478656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.478664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.478670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.478683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.488647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.488707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.488721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.488727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.488734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.488746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.498618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.849 [2024-11-20 08:31:25.498680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.849 [2024-11-20 08:31:25.498705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.849 [2024-11-20 08:31:25.498714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.849 [2024-11-20 08:31:25.498720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.849 [2024-11-20 08:31:25.498740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.849 qpair failed and we were unable to recover it. 
00:34:20.849 [2024-11-20 08:31:25.508577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.508641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.508666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.508675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.508681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.508700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.518709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.518766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.518791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.518800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.518807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.518826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.528745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.528799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.528814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.528822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.528828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.528842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.538745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.538794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.538807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.538818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.538825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.538839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.548789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.548843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.548856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.548866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.548873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.548887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.558681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.558728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.558741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.558748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.558754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.558768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:20.850 [2024-11-20 08:31:25.568859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.850 [2024-11-20 08:31:25.568921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.850 [2024-11-20 08:31:25.568936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.850 [2024-11-20 08:31:25.568943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.850 [2024-11-20 08:31:25.568952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:20.850 [2024-11-20 08:31:25.568968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.850 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.578884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.578934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.578948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.578955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.578962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.578979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.588792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.588846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.588859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.588872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.588878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.588892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.598924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.599023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.599036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.599043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.599049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.599063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.608960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.609014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.609028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.609035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.609041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.609054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.618961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.619013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.619027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.619033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.619040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.619053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.629031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.629097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.629110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.629117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.629123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.629137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.639030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.639124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.639137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.639144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.639150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.639164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.649012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.111 [2024-11-20 08:31:25.649063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.111 [2024-11-20 08:31:25.649077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.111 [2024-11-20 08:31:25.649084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.111 [2024-11-20 08:31:25.649090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.111 [2024-11-20 08:31:25.649103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.111 qpair failed and we were unable to recover it. 
00:34:21.111 [2024-11-20 08:31:25.659161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.659205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.659218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.659225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.659231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.659245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.669168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.669244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.669257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.669267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.669274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.669287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.679134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.679182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.679195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.679202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.679208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.679221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.689198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.689250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.689263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.689270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.689276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.689290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.699147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.699198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.699212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.699219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.699225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.699239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.709244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.709297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.709311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.709317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.709324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.709340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.719120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.719170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.719184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.719191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.719197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.719212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.729162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.729221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.729237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.729244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.729250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.729265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.739286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.739333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.739347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.739354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.739360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.739373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.749344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.749399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.749412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.749419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.749426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.749440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.759224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.759286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.759300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.759306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.759313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.759327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.769416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.769471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.769485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.769491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.769498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.769511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.779408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.112 [2024-11-20 08:31:25.779451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.112 [2024-11-20 08:31:25.779464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.112 [2024-11-20 08:31:25.779471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.112 [2024-11-20 08:31:25.779477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.112 [2024-11-20 08:31:25.779491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.112 qpair failed and we were unable to recover it. 
00:34:21.112 [2024-11-20 08:31:25.789467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.113 [2024-11-20 08:31:25.789524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.113 [2024-11-20 08:31:25.789538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.113 [2024-11-20 08:31:25.789545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.113 [2024-11-20 08:31:25.789551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.113 [2024-11-20 08:31:25.789565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.113 qpair failed and we were unable to recover it. 
00:34:21.113 [2024-11-20 08:31:25.799469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.113 [2024-11-20 08:31:25.799522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.113 [2024-11-20 08:31:25.799535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.113 [2024-11-20 08:31:25.799545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.113 [2024-11-20 08:31:25.799552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.113 [2024-11-20 08:31:25.799565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.113 qpair failed and we were unable to recover it.
00:34:21.113 [2024-11-20 08:31:25.809510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.113 [2024-11-20 08:31:25.809561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.113 [2024-11-20 08:31:25.809577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.113 [2024-11-20 08:31:25.809584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.113 [2024-11-20 08:31:25.809590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.113 [2024-11-20 08:31:25.809605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.113 qpair failed and we were unable to recover it.
00:34:21.113 [2024-11-20 08:31:25.819367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.113 [2024-11-20 08:31:25.819410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.113 [2024-11-20 08:31:25.819423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.113 [2024-11-20 08:31:25.819430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.113 [2024-11-20 08:31:25.819437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.113 [2024-11-20 08:31:25.819450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.113 qpair failed and we were unable to recover it.
00:34:21.113 [2024-11-20 08:31:25.829572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.113 [2024-11-20 08:31:25.829630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.113 [2024-11-20 08:31:25.829643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.113 [2024-11-20 08:31:25.829650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.113 [2024-11-20 08:31:25.829656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.113 [2024-11-20 08:31:25.829670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.113 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.839574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.839625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.839638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.839645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.839651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.839669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.849664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.849759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.849772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.849780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.849786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.849799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.859566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.859613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.859626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.859633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.859639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.859653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.869603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.869649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.869662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.869669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.869675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.869689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.879647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.879697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.879710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.879717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.879724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.879737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.889707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.889760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.889774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.375 [2024-11-20 08:31:25.889780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.375 [2024-11-20 08:31:25.889787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.375 [2024-11-20 08:31:25.889800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.375 qpair failed and we were unable to recover it.
00:34:21.375 [2024-11-20 08:31:25.899685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.375 [2024-11-20 08:31:25.899732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.375 [2024-11-20 08:31:25.899745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.899752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.899758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.899772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.909720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.909765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.909778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.909785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.909791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.909805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.919628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.919681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.919694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.919701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.919707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.919721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.929680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.929731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.929744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.929754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.929761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.929774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.939802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.939849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.939868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.939875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.939882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.939896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.949828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.949876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.949890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.949897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.949903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.949917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.959854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.959906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.959919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.959926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.959932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.959946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.969895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.969942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.969955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.969962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.969968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.969985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.979891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.979938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.979951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.979958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.979964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.979977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.989948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:25.989994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:25.990006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:25.990013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:25.990020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:25.990033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:25.999959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:26.000027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:26.000041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:26.000048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:26.000054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:26.000067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:26.010043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:26.010088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:26.010102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:26.010109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:26.010115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:26.010129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:26.020078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:26.020129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:26.020142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:26.020149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.376 [2024-11-20 08:31:26.020155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.376 [2024-11-20 08:31:26.020169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.376 qpair failed and we were unable to recover it.
00:34:21.376 [2024-11-20 08:31:26.030038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.376 [2024-11-20 08:31:26.030085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.376 [2024-11-20 08:31:26.030098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.376 [2024-11-20 08:31:26.030105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.030111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.030124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.040083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.040131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.040144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.040151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.040157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.040171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.050114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.050167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.050179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.050186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.050193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.050206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.060123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.060170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.060183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.060193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.060199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.060213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.070017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.070061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.070075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.070082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.070088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.070101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.080247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.080310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.080324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.080331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.080337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.080351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.090209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.377 [2024-11-20 08:31:26.090257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.377 [2024-11-20 08:31:26.090270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.377 [2024-11-20 08:31:26.090276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.377 [2024-11-20 08:31:26.090283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.377 [2024-11-20 08:31:26.090296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.377 qpair failed and we were unable to recover it.
00:34:21.377 [2024-11-20 08:31:26.100224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.638 [2024-11-20 08:31:26.100270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.638 [2024-11-20 08:31:26.100283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.638 [2024-11-20 08:31:26.100292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.638 [2024-11-20 08:31:26.100302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.638 [2024-11-20 08:31:26.100319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.638 qpair failed and we were unable to recover it.
00:34:21.638 [2024-11-20 08:31:26.110230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.638 [2024-11-20 08:31:26.110279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.638 [2024-11-20 08:31:26.110292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.638 [2024-11-20 08:31:26.110299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.638 [2024-11-20 08:31:26.110305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.638 [2024-11-20 08:31:26.110318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.638 qpair failed and we were unable to recover it.
00:34:21.639 [2024-11-20 08:31:26.120157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.639 [2024-11-20 08:31:26.120205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.639 [2024-11-20 08:31:26.120218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.639 [2024-11-20 08:31:26.120225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.639 [2024-11-20 08:31:26.120232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.639 [2024-11-20 08:31:26.120245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.639 qpair failed and we were unable to recover it.
00:34:21.639 [2024-11-20 08:31:26.130367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.639 [2024-11-20 08:31:26.130437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.639 [2024-11-20 08:31:26.130451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.639 [2024-11-20 08:31:26.130458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.639 [2024-11-20 08:31:26.130464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.639 [2024-11-20 08:31:26.130477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.639 qpair failed and we were unable to recover it.
00:34:21.639 [2024-11-20 08:31:26.140358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:21.639 [2024-11-20 08:31:26.140405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:21.639 [2024-11-20 08:31:26.140419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:21.639 [2024-11-20 08:31:26.140426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:21.639 [2024-11-20 08:31:26.140432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490
00:34:21.639 [2024-11-20 08:31:26.140446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:21.639 qpair failed and we were unable to recover it.
00:34:21.639 [2024-11-20 08:31:26.150352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.150418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.150431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.150438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.150444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.150457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.160412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.160500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.160513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.160520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.160526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.160540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.170457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.170513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.170526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.170533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.170539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.170553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.180453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.180498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.180511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.180518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.180524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.180538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.190500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.190546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.190559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.190570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.190576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.190589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.200515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.200560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.200573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.200580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.200586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.200600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.210561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.210607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.210620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.210627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.210633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.210646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.220547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.220593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.220606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.220613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.220619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.220632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.230459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.230521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.230534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.230541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.230547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.230564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.639 qpair failed and we were unable to recover it. 
00:34:21.639 [2024-11-20 08:31:26.240623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.639 [2024-11-20 08:31:26.240671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.639 [2024-11-20 08:31:26.240686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.639 [2024-11-20 08:31:26.240693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.639 [2024-11-20 08:31:26.240699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.639 [2024-11-20 08:31:26.240715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.250672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.250721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.250736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.250743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.250749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.250763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.260674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.260720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.260734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.260741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.260747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.260761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.270694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.270741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.270753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.270760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.270767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.270780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.280732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.280780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.280793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.280800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.280807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.280820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.290801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.290849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.290865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.290872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.290879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.290892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.300746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.300788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.300802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.300809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.300815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.300829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.310807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.310852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.310870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.310877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.310883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.310898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.320833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.320881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.320894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.320904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.320911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.320925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.330900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.330950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.330963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.330970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.330976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.330990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.340894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.340943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.340957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.340964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.340971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.340985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.350911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.350961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.350975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.350982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.350988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.351003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-11-20 08:31:26.360947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.640 [2024-11-20 08:31:26.360997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.640 [2024-11-20 08:31:26.361010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.640 [2024-11-20 08:31:26.361016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.640 [2024-11-20 08:31:26.361023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.640 [2024-11-20 08:31:26.361043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.902 [2024-11-20 08:31:26.370989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.902 [2024-11-20 08:31:26.371036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.902 [2024-11-20 08:31:26.371049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.902 [2024-11-20 08:31:26.371056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.902 [2024-11-20 08:31:26.371063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.902 [2024-11-20 08:31:26.371077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.902 qpair failed and we were unable to recover it. 
00:34:21.902 [2024-11-20 08:31:26.380858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.902 [2024-11-20 08:31:26.380905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.902 [2024-11-20 08:31:26.380919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.902 [2024-11-20 08:31:26.380927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.902 [2024-11-20 08:31:26.380933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.902 [2024-11-20 08:31:26.380947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.902 qpair failed and we were unable to recover it. 
00:34:21.902 [2024-11-20 08:31:26.390895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.902 [2024-11-20 08:31:26.390972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.902 [2024-11-20 08:31:26.390985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.902 [2024-11-20 08:31:26.390993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.902 [2024-11-20 08:31:26.390999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.902 [2024-11-20 08:31:26.391012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.902 qpair failed and we were unable to recover it. 
00:34:21.902 [2024-11-20 08:31:26.401082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.902 [2024-11-20 08:31:26.401171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.401184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.401191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.401198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.401211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.411104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.411152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.411165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.411172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.411178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.411191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.421094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.421156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.421169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.421176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.421182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.421195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.431002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.431048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.431061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.431068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.431074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.431087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.441160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.441207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.441221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.441229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.441236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.441250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.451216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.451261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.451275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.451285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.451291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.451305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.461199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.461247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.461260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.461267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.461273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.461286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.471227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.471320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.471333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.471340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.471347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.471360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.481253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.481298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.481311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.481318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.481324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.481338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.491216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.491313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.491327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.491334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.491340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.491358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.501338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.501385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.501398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.501405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.501411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.501425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.511356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.511408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.511421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.511428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.511434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.511447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.521393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.903 [2024-11-20 08:31:26.521439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.903 [2024-11-20 08:31:26.521453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.903 [2024-11-20 08:31:26.521460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.903 [2024-11-20 08:31:26.521466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.903 [2024-11-20 08:31:26.521479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.903 qpair failed and we were unable to recover it. 
00:34:21.903 [2024-11-20 08:31:26.531309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.531356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.531370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.531378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.531384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.531398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.541435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.541487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.541501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.541508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.541514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.541528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.551461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.551510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.551523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.551530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.551536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.551549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.561495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.561551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.561566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.561574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.561580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.561594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.571563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.571635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.571648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.571655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.571662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.571675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.581414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.581461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.581474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.581484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.581491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.581504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.591581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.591630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.591643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.591650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.591656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.591670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.601577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.601622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.601635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.601642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.601649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.601662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.611522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.611584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.611597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.611604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.611611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.611624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:21.904 [2024-11-20 08:31:26.621656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.904 [2024-11-20 08:31:26.621698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.904 [2024-11-20 08:31:26.621711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.904 [2024-11-20 08:31:26.621718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.904 [2024-11-20 08:31:26.621724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:21.904 [2024-11-20 08:31:26.621741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.904 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.631704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.631799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.631812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.631818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.631825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.631838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.641725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.641770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.641783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.641790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.641797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.641810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.651765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.651815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.651828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.651834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.651841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.651854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.661761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.661806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.661819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.661825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.661832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.661845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.671793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.671841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.671855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.671865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.671872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.671886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.681802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.681929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.681944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.681951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.681957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.681972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.166 [2024-11-20 08:31:26.691893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.166 [2024-11-20 08:31:26.691941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.166 [2024-11-20 08:31:26.691954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.166 [2024-11-20 08:31:26.691961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.166 [2024-11-20 08:31:26.691967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.166 [2024-11-20 08:31:26.691981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.166 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.701882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.701926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.701939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.701946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.701952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.701966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.711924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.711970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.711983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.711993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.712000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.712014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.721934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.722019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.722033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.722040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.722047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.722061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.731914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.731978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.731991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.731998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.732005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.732019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.741857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.741966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.741981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.741989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.741999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.742014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.752016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.752065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.752079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.752086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.752092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.752109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.762045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.762099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.762112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.762119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.762125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.762139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.772117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.772167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.772182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.772189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.772195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.772209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.782119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.782164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.782177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.782184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.782190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.782203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.792198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.792260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.792273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.792280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.792286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.792300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.802187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.802235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.802248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.802256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.802262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.802275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.812238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.812305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.812320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.812327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.812334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.812348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.167 [2024-11-20 08:31:26.822201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.167 [2024-11-20 08:31:26.822284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.167 [2024-11-20 08:31:26.822298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.167 [2024-11-20 08:31:26.822305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.167 [2024-11-20 08:31:26.822311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.167 [2024-11-20 08:31:26.822324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.167 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.832208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.832270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.832284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.832291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.832297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.832316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.842259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.842304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.842318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.842328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.842335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.842348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.852236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.852288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.852301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.852308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.852314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.852328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.862364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.862445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.862458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.862464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.862471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.862484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.872354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.872418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.872431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.872438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.872444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.872457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.168 [2024-11-20 08:31:26.882387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.168 [2024-11-20 08:31:26.882436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.168 [2024-11-20 08:31:26.882449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.168 [2024-11-20 08:31:26.882455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.168 [2024-11-20 08:31:26.882462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.168 [2024-11-20 08:31:26.882478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.168 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.892311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.892362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.892376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.892383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.892389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.892402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.429 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.902307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.902369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.902383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.902390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.902396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.902411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.429 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.912458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.912511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.912525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.912532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.912538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.912552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.429 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.922482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.922531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.922544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.922551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.922557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.922571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.429 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.932558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.932609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.932622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.932629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.932635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.932648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.429 qpair failed and we were unable to recover it. 
00:34:22.429 [2024-11-20 08:31:26.942610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.429 [2024-11-20 08:31:26.942681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.429 [2024-11-20 08:31:26.942694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.429 [2024-11-20 08:31:26.942701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.429 [2024-11-20 08:31:26.942707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.429 [2024-11-20 08:31:26.942720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:26.952546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:26.952592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:26.952605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:26.952612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:26.952618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:26.952631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:26.962613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:26.962658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:26.962671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:26.962678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:26.962684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:26.962697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:26.972653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:26.972730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:26.972756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:26.972769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:26.972776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:26.972796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:26.982636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:26.982686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:26.982701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:26.982708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:26.982715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:26.982730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:26.992696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:26.992745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:26.992759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:26.992766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:26.992772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:26.992787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:27.002724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:27.002771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:27.002784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:27.002791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:27.002797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:27.002811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:27.012760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:27.012811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:27.012824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:27.012831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:27.012837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:27.012855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:27.022751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:27.022800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:27.022814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:27.022821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:27.022827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:27.022840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 
00:34:22.430 [2024-11-20 08:31:27.032787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.430 [2024-11-20 08:31:27.032833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.430 [2024-11-20 08:31:27.032846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.430 [2024-11-20 08:31:27.032853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.430 [2024-11-20 08:31:27.032860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa2f490 00:34:22.430 [2024-11-20 08:31:27.032877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.430 qpair failed and we were unable to recover it. 00:34:22.430 [2024-11-20 08:31:27.032986] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:34:22.430 A controller has encountered a failure and is being reset. 00:34:22.430 [2024-11-20 08:31:27.033092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2c020 (9): Bad file descriptor 00:34:22.430 Controller properly reset. 
00:34:22.430 Initializing NVMe Controllers 00:34:22.430 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:22.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:22.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:22.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:22.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:22.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:22.430 Initialization complete. Launching workers. 00:34:22.430 Starting thread on core 1 00:34:22.430 Starting thread on core 2 00:34:22.430 Starting thread on core 3 00:34:22.430 Starting thread on core 0 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:22.430 00:34:22.430 real 0m11.426s 00:34:22.430 user 0m21.775s 00:34:22.430 sys 0m3.555s 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 ************************************ 00:34:22.430 END TEST nvmf_target_disconnect_tc2 00:34:22.430 ************************************ 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:22.430 08:31:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:22.430 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:22.430 rmmod nvme_tcp 00:34:22.692 rmmod nvme_fabrics 00:34:22.692 rmmod nvme_keyring 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 2187947 ']' 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2187947 ']' 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2187947' 00:34:22.692 killing process with pid 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2187947 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@254 -- # local dev 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:22.692 08:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # return 0 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 
-- # flush_ip cvl_0_0 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@274 -- # iptr 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # grep 
-v SPDK_NVMF 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:34:25.251 00:34:25.251 real 0m22.881s 00:34:25.251 user 0m50.028s 00:34:25.251 sys 0m10.424s 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:25.251 ************************************ 00:34:25.251 END TEST nvmf_target_disconnect 00:34:25.251 ************************************ 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # [[ tcp == \t\c\p ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.251 ************************************ 00:34:25.251 START TEST nvmf_digest 00:34:25.251 ************************************ 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:25.251 * Looking for test storage... 
00:34:25.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:25.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.251 --rc genhtml_branch_coverage=1 00:34:25.251 --rc genhtml_function_coverage=1 00:34:25.251 --rc genhtml_legend=1 00:34:25.251 --rc geninfo_all_blocks=1 00:34:25.251 --rc geninfo_unexecuted_blocks=1 00:34:25.251 00:34:25.251 ' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:25.251 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:34:25.251 --rc genhtml_branch_coverage=1 00:34:25.251 --rc genhtml_function_coverage=1 00:34:25.251 --rc genhtml_legend=1 00:34:25.251 --rc geninfo_all_blocks=1 00:34:25.251 --rc geninfo_unexecuted_blocks=1 00:34:25.251 00:34:25.251 ' 00:34:25.251 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:25.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.251 --rc genhtml_branch_coverage=1 00:34:25.251 --rc genhtml_function_coverage=1 00:34:25.251 --rc genhtml_legend=1 00:34:25.252 --rc geninfo_all_blocks=1 00:34:25.252 --rc geninfo_unexecuted_blocks=1 00:34:25.252 00:34:25.252 ' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:25.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.252 --rc genhtml_branch_coverage=1 00:34:25.252 --rc genhtml_function_coverage=1 00:34:25.252 --rc genhtml_legend=1 00:34:25.252 --rc geninfo_all_blocks=1 00:34:25.252 --rc geninfo_unexecuted_blocks=1 00:34:25.252 00:34:25.252 ' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:25.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:25.252 08:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:34:25.252 08:31:29 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # local -ga x722 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:33.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ 
ice == unknown ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:33.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:33.398 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.398 08:31:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:33.399 Found net devices under 0000:31:00.0: cvl_0_0 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:33.399 Found net devices under 0000:31:00.1: cvl_0_1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@247 -- # create_target_ns 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@28 -- # local -g _dev 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set 
cvl_0_1 netns nvmf_ns_spdk 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:33.399 10.0.0.1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local 
val=167772162 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:33.399 10.0.0.2 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link 
set cvl_0_1 up' 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:33.399 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # 
[[ -n '' ]] 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:33.400 08:31:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:33.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.480 ms 00:34:33.400 00:34:33.400 --- 10.0.0.1 ping statistics --- 00:34:33.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.400 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
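The trace above assigns 10.0.0.1 and 10.0.0.2 by converting integer pool values (167772161, 167772162) to dotted-quad strings via `val_to_ip` before calling `ip addr add`. A minimal re-implementation of that conversion (the helper name and its `printf '%u.%u.%u.%u\n'` output come from the trace; the byte-shift arithmetic is an assumption about how `nvmf/setup.sh` derives the four octets):

```shell
#!/usr/bin/env bash
# val_to_ip: render a 32-bit integer as a dotted-quad IPv4 address,
# mirroring the nvmf/setup.sh@11 helper exercised in the trace above.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

The pool base `0x0a000001` seen at `setup_interfaces` is simply 10.0.0.1 in this encoding, which is why each initiator/target pair consumes two consecutive integers from the pool.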
00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:33.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:34:33.400 00:34:33.400 --- 10.0.0.2 ping statistics --- 00:34:33.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.400 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:33.400 08:31:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:33.400 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:33.401 08:31:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:33.401 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:33.662 ************************************ 00:34:33.662 START TEST nvmf_digest_clean 00:34:33.662 ************************************ 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 
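Interface-pair bookkeeping in the trace rests on the `dev_map` associative array (`dev_map["initiator0"]=cvl_0_0`, `dev_map["target0"]=cvl_0_1`) and on `get_net_dev`, which resolves a logical name to its physical device or fails when the pair was never created (the `return 1` paths seen for `initiator1` and `target1`). A stripped-down sketch of that lookup, assuming the same mapping the run above populated:

```shell
#!/usr/bin/env bash
# Logical-to-physical device map, as populated by setup_interface_pair
# in the trace above (one pair, phy interfaces cvl_0_0 / cvl_0_1).
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# get_net_dev: echo the physical device behind a logical name, or
# return 1 when no such pair exists -- e.g. initiator1/target1 here,
# which is why NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP
# end up empty in the legacy-env step.
get_net_dev() {
    local dev=$1
    [[ -n ${dev_map[$dev]} ]] || return 1
    echo "${dev_map[$dev]}"
}

get_net_dev initiator0          # cvl_0_0
get_net_dev target1 || echo "no such pair"
```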
00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=2194021 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 2194021 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2194021 ']' 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.662 08:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:33.662 [2024-11-20 08:31:38.258432] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:33.662 [2024-11-20 08:31:38.258483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.662 [2024-11-20 08:31:38.345936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.662 [2024-11-20 08:31:38.383834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.662 [2024-11-20 08:31:38.383878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.662 [2024-11-20 08:31:38.383887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.662 [2024-11-20 08:31:38.383893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.662 [2024-11-20 08:31:38.383899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:33.662 [2024-11-20 08:31:38.384509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.602 null0 00:34:34.602 [2024-11-20 08:31:39.156962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.602 [2024-11-20 08:31:39.181169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:34.602 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2194070 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2194070 /var/tmp/bperf.sock 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2194070 ']' 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.603 08:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.603 [2024-11-20 08:31:39.239205] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:34.603 [2024-11-20 08:31:39.239254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194070 ] 00:34:34.862 [2024-11-20 08:31:39.334834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.862 [2024-11-20 08:31:39.370846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.433 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.433 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:35.433 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:35.433 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:35.433 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:35.694 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.695 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.955 nvme0n1 00:34:35.955 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:35.955 08:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.216 Running I/O for 2 seconds... 00:34:38.099 19691.00 IOPS, 76.92 MiB/s [2024-11-20T07:31:42.828Z] 19682.00 IOPS, 76.88 MiB/s 00:34:38.099 Latency(us) 00:34:38.099 [2024-11-20T07:31:42.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:38.099 nvme0n1 : 2.00 19691.65 76.92 0.00 0.00 6492.75 3085.65 14636.37 00:34:38.099 [2024-11-20T07:31:42.828Z] =================================================================================================================== 00:34:38.099 [2024-11-20T07:31:42.828Z] Total : 19691.65 76.92 0.00 0.00 6492.75 3085.65 14636.37 00:34:38.099 { 00:34:38.099 "results": [ 00:34:38.099 { 00:34:38.099 "job": "nvme0n1", 00:34:38.099 "core_mask": "0x2", 00:34:38.099 "workload": "randread", 00:34:38.099 "status": "finished", 00:34:38.099 "queue_depth": 128, 00:34:38.099 "io_size": 4096, 00:34:38.099 "runtime": 2.004911, 00:34:38.099 "iops": 19691.647160397642, 00:34:38.099 "mibps": 76.92049672030329, 00:34:38.099 "io_failed": 0, 00:34:38.099 "io_timeout": 0, 00:34:38.099 "avg_latency_us": 6492.749292806484, 00:34:38.099 "min_latency_us": 3085.653333333333, 00:34:38.099 "max_latency_us": 14636.373333333333 00:34:38.099 } 00:34:38.099 ], 00:34:38.099 "core_count": 1 00:34:38.099 } 00:34:38.099 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:38.099 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:38.099 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:38.099 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:38.099 | select(.opcode=="crc32c") 00:34:38.099 | "\(.module_name) \(.executed)"' 00:34:38.099 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2194070 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2194070 ']' 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2194070 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2194070 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2194070' 00:34:38.360 killing process with pid 2194070 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2194070 00:34:38.360 Received shutdown signal, test time was about 2.000000 seconds 00:34:38.360 00:34:38.360 Latency(us) 00:34:38.360 [2024-11-20T07:31:43.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.360 [2024-11-20T07:31:43.089Z] =================================================================================================================== 00:34:38.360 [2024-11-20T07:31:43.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:38.360 08:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2194070 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2194860 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2194860 /var/tmp/bperf.sock 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2194860 ']' 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.360 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:38.620 [2024-11-20 08:31:43.108408] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:38.620 [2024-11-20 08:31:43.108468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194860 ] 00:34:38.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:38.620 Zero copy mechanism will not be used. 
00:34:38.620 [2024-11-20 08:31:43.202322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.620 [2024-11-20 08:31:43.238017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.191 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.191 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:39.191 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:39.191 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:39.191 08:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.452 08:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.452 08:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:40.022 nvme0n1 00:34:40.022 08:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:40.022 08:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:40.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:40.022 Zero copy mechanism will not be used. 00:34:40.022 Running I/O for 2 seconds... 
00:34:41.904 3386.00 IOPS, 423.25 MiB/s [2024-11-20T07:31:46.633Z] 3187.50 IOPS, 398.44 MiB/s 00:34:41.904 Latency(us) 00:34:41.904 [2024-11-20T07:31:46.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.904 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:41.904 nvme0n1 : 2.00 3193.03 399.13 0.00 0.00 5008.30 1010.35 14745.60 00:34:41.904 [2024-11-20T07:31:46.633Z] =================================================================================================================== 00:34:41.904 [2024-11-20T07:31:46.633Z] Total : 3193.03 399.13 0.00 0.00 5008.30 1010.35 14745.60 00:34:41.904 { 00:34:41.904 "results": [ 00:34:41.904 { 00:34:41.904 "job": "nvme0n1", 00:34:41.904 "core_mask": "0x2", 00:34:41.904 "workload": "randread", 00:34:41.904 "status": "finished", 00:34:41.904 "queue_depth": 16, 00:34:41.904 "io_size": 131072, 00:34:41.904 "runtime": 2.00155, 00:34:41.904 "iops": 3193.0254053108843, 00:34:41.904 "mibps": 399.12817566386053, 00:34:41.904 "io_failed": 0, 00:34:41.904 "io_timeout": 0, 00:34:41.904 "avg_latency_us": 5008.297318103583, 00:34:41.904 "min_latency_us": 1010.3466666666667, 00:34:41.904 "max_latency_us": 14745.6 00:34:41.904 } 00:34:41.904 ], 00:34:41.904 "core_count": 1 00:34:41.904 } 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:42.166 | select(.opcode=="crc32c") 00:34:42.166 | "\(.module_name) \(.executed)"' 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2194860 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2194860 ']' 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2194860 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2194860 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2194860' 00:34:42.166 killing process with pid 2194860 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2194860 00:34:42.166 Received shutdown signal, test time was about 2.000000 seconds 
00:34:42.166 00:34:42.166 Latency(us) 00:34:42.166 [2024-11-20T07:31:46.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.166 [2024-11-20T07:31:46.895Z] =================================================================================================================== 00:34:42.166 [2024-11-20T07:31:46.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.166 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2194860 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2195680 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2195680 /var/tmp/bperf.sock 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2195680 ']' 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.428 08:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:42.428 [2024-11-20 08:31:47.033454] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:42.428 [2024-11-20 08:31:47.033514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195680 ] 00:34:42.428 [2024-11-20 08:31:47.123060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.428 [2024-11-20 08:31:47.152554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.370 08:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.370 08:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:43.370 08:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:43.370 08:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:43.370 08:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:43.370 08:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.370 08:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.631 nvme0n1 00:34:43.631 08:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:43.631 08:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:43.892 Running I/O for 2 seconds... 
00:34:45.866 21601.00 IOPS, 84.38 MiB/s [2024-11-20T07:31:50.595Z] 21684.00 IOPS, 84.70 MiB/s 00:34:45.867 Latency(us) 00:34:45.867 [2024-11-20T07:31:50.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.867 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:45.867 nvme0n1 : 2.01 21705.70 84.79 0.00 0.00 5888.82 1884.16 10321.92 00:34:45.867 [2024-11-20T07:31:50.596Z] =================================================================================================================== 00:34:45.867 [2024-11-20T07:31:50.596Z] Total : 21705.70 84.79 0.00 0.00 5888.82 1884.16 10321.92 00:34:45.867 { 00:34:45.867 "results": [ 00:34:45.867 { 00:34:45.867 "job": "nvme0n1", 00:34:45.867 "core_mask": "0x2", 00:34:45.867 "workload": "randwrite", 00:34:45.867 "status": "finished", 00:34:45.867 "queue_depth": 128, 00:34:45.867 "io_size": 4096, 00:34:45.867 "runtime": 2.006109, 00:34:45.867 "iops": 21705.69993953469, 00:34:45.867 "mibps": 84.78789038880738, 00:34:45.867 "io_failed": 0, 00:34:45.867 "io_timeout": 0, 00:34:45.867 "avg_latency_us": 5888.822291628391, 00:34:45.867 "min_latency_us": 1884.16, 00:34:45.867 "max_latency_us": 10321.92 00:34:45.867 } 00:34:45.867 ], 00:34:45.867 "core_count": 1 00:34:45.867 } 00:34:45.867 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:45.867 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:45.867 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:45.867 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:45.867 | select(.opcode=="crc32c") 00:34:45.867 | "\(.module_name) \(.executed)"' 00:34:45.867 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2195680 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2195680 ']' 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2195680 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195680 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195680' 00:34:46.128 killing process with pid 2195680 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2195680 00:34:46.128 Received shutdown signal, test time was about 2.000000 seconds 
00:34:46.128 00:34:46.128 Latency(us) 00:34:46.128 [2024-11-20T07:31:50.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.128 [2024-11-20T07:31:50.857Z] =================================================================================================================== 00:34:46.128 [2024-11-20T07:31:50.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2195680 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2196425 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2196425 /var/tmp/bperf.sock 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2196425 ']' 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:46.128 08:31:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.128 08:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:46.128 [2024-11-20 08:31:50.827593] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:46.128 [2024-11-20 08:31:50.827652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196425 ] 00:34:46.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:46.128 Zero copy mechanism will not be used. 
00:34:46.390 [2024-11-20 08:31:50.915062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.390 [2024-11-20 08:31:50.944663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.960 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.960 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:46.960 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:46.960 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:46.960 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:47.221 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.221 08:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.791 nvme0n1 00:34:47.791 08:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:47.791 08:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:47.791 Zero copy mechanism will not be used. 00:34:47.791 Running I/O for 2 seconds... 
00:34:49.679 6705.00 IOPS, 838.12 MiB/s [2024-11-20T07:31:54.408Z] 5503.50 IOPS, 687.94 MiB/s 00:34:49.679 Latency(us) 00:34:49.679 [2024-11-20T07:31:54.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.679 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:49.679 nvme0n1 : 2.00 5503.63 687.95 0.00 0.00 2903.74 1501.87 13052.59 00:34:49.679 [2024-11-20T07:31:54.408Z] =================================================================================================================== 00:34:49.679 [2024-11-20T07:31:54.408Z] Total : 5503.63 687.95 0.00 0.00 2903.74 1501.87 13052.59 00:34:49.679 { 00:34:49.679 "results": [ 00:34:49.679 { 00:34:49.679 "job": "nvme0n1", 00:34:49.679 "core_mask": "0x2", 00:34:49.679 "workload": "randwrite", 00:34:49.679 "status": "finished", 00:34:49.679 "queue_depth": 16, 00:34:49.679 "io_size": 131072, 00:34:49.679 "runtime": 2.003585, 00:34:49.679 "iops": 5503.634734737982, 00:34:49.679 "mibps": 687.9543418422478, 00:34:49.679 "io_failed": 0, 00:34:49.679 "io_timeout": 0, 00:34:49.679 "avg_latency_us": 2903.7412702155316, 00:34:49.679 "min_latency_us": 1501.8666666666666, 00:34:49.679 "max_latency_us": 13052.586666666666 00:34:49.679 } 00:34:49.679 ], 00:34:49.679 "core_count": 1 00:34:49.679 } 00:34:49.679 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:49.679 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:49.679 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:49.679 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:49.679 | select(.opcode=="crc32c") 00:34:49.679 | "\(.module_name) \(.executed)"' 00:34:49.679 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2196425 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2196425 ']' 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2196425 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:49.940 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196425 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196425' 00:34:49.941 killing process with pid 2196425 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2196425 00:34:49.941 Received shutdown signal, test time was about 2.000000 seconds 
00:34:49.941 00:34:49.941 Latency(us) 00:34:49.941 [2024-11-20T07:31:54.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.941 [2024-11-20T07:31:54.670Z] =================================================================================================================== 00:34:49.941 [2024-11-20T07:31:54.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.941 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2196425 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2194021 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2194021 ']' 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2194021 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2194021 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2194021' 00:34:50.202 killing process with pid 2194021 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2194021 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2194021 00:34:50.202 00:34:50.202 
real 0m16.689s 00:34:50.202 user 0m33.175s 00:34:50.202 sys 0m3.406s 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.202 ************************************ 00:34:50.202 END TEST nvmf_digest_clean 00:34:50.202 ************************************ 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.202 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.463 ************************************ 00:34:50.463 START TEST nvmf_digest_error 00:34:50.463 ************************************ 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=2197138 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 2197138 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2197138 ']' 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.463 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.464 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.464 08:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.464 [2024-11-20 08:31:55.030810] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:50.464 [2024-11-20 08:31:55.030875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.464 [2024-11-20 08:31:55.118306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.464 [2024-11-20 08:31:55.158108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.464 [2024-11-20 08:31:55.158143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:50.464 [2024-11-20 08:31:55.158152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.464 [2024-11-20 08:31:55.158158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.464 [2024-11-20 08:31:55.158164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.464 [2024-11-20 08:31:55.158788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.407 [2024-11-20 08:31:55.856805] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.407 08:31:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.407 null0 00:34:51.407 [2024-11-20 08:31:55.939251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.407 [2024-11-20 08:31:55.963476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2197447 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2197447 /var/tmp/bperf.sock 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2197447 ']' 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
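The digest errors that fill the rest of this run are injected CRC-32C failures: NVMe/TCP's optional data digest (DDGST) is a CRC-32C over the PDU payload, and the test's `accel_error_inject_error -o crc32c -t corrupt` makes SPDK's accel path return corrupted digests, so every read completes with a transient transport error. As a minimal, self-contained sketch of what a digest check does (a pure-Python bitwise CRC-32C for illustration only, not SPDK's accelerated implementation):

```python
# Minimal CRC-32C (Castagnoli) sketch of an NVMe/TCP data-digest check.
# Illustration only; SPDK computes this through its accel framework
# (the module this test reroutes to "error" and then corrupts).
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial


def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C: init 0xFFFFFFFF, reflected, final XOR 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


# Standard check value for CRC-32C.
assert crc32c(b"123456789") == 0xE3069283

# A receiver recomputes the digest over the PDU data; any mismatch is the
# "data digest error" reported in the completions below.
payload = b"\x00" * 4096
digest = crc32c(payload)
corrupted = b"\x01" + payload[1:]   # single corrupted byte
assert crc32c(corrupted) != digest  # detected as a digest error
```

In the log that follows, each injected mismatch surfaces as `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`, and since the bperf controller was configured with `--bdev-retry-count -1`, bdev_nvme keeps retrying the failed reads for the duration of the 2-second run.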
00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:51.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.407 08:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.407 [2024-11-20 08:31:56.027581] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:51.407 [2024-11-20 08:31:56.027630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197447 ] 00:34:51.407 [2024-11-20 08:31:56.117821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.668 [2024-11-20 08:31:56.147734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.240 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.241 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:52.501 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.501 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.501 08:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.501 nvme0n1 00:34:52.501 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:52.501 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.501 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:52.763 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.763 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:52.763 08:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.763 Running I/O for 2 seconds... 00:34:52.763 [2024-11-20 08:31:57.338910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.338943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.338952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.351191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.351212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.363441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.363461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.363468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.376521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.376547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5866 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.376554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.390553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.390572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.390578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.402651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.402669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.402676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.414463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.414480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.414487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.427819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.427836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.427844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.441241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.441258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.441265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.453547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.453565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.763 [2024-11-20 08:31:57.453571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.763 [2024-11-20 08:31:57.467559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:52.763 [2024-11-20 08:31:57.467577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.764 [2024-11-20 08:31:57.467583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.764 [2024-11-20 08:31:57.480176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x240f3a0) 00:34:52.764 [2024-11-20 08:31:57.480193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.764 [2024-11-20 08:31:57.480200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.492835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.492853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.492859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.505038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.505055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.505062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.517933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.517950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.530089] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.530107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.530113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.543190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.543208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.553790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.553813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.566895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.566913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.566919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.580586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.580603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.580609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.594140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.594161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.594167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.605072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.605089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.618072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.618088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.618095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.631186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.631203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.631210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.644704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.644721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.644728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.655264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.655282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 08:31:57.655288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.026 [2024-11-20 08:31:57.668692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.026 [2024-11-20 08:31:57.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.026 [2024-11-20 
08:31:57.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.681318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.681336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.681343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.695056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.695074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.695080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.704373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.704391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.704397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.718255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.718272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10335 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.718279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.731382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.731399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.731406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.027 [2024-11-20 08:31:57.744059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.027 [2024-11-20 08:31:57.744077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.027 [2024-11-20 08:31:57.744083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.289 [2024-11-20 08:31:57.757380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.289 [2024-11-20 08:31:57.757398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.289 [2024-11-20 08:31:57.757405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.289 [2024-11-20 08:31:57.771127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.289 [2024-11-20 08:31:57.771144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.289 [2024-11-20 08:31:57.771151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.289 [2024-11-20 08:31:57.781893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.289 [2024-11-20 08:31:57.781910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.289 [2024-11-20 08:31:57.781917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.289 [2024-11-20 08:31:57.794895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.289 [2024-11-20 08:31:57.794913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.289 [2024-11-20 08:31:57.794919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.289 [2024-11-20 08:31:57.808390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.289 [2024-11-20 08:31:57.808407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.289 [2024-11-20 08:31:57.808417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.819409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.819426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.819433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.831447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.831463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.831470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.845413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.845431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.845438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.857938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.857955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.857962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.871341] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.871358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.871365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.884855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.884875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.884882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.896649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.896665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.896672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.908352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.908369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.908375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.922062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.922082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.922088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.933878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.933894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.933901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.945760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.945777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.945783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.960124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.960142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.960148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.973062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.973079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.973086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.985067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.985084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.985090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:57.996466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:57.996483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 08:31:57.996490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.290 [2024-11-20 08:31:58.008018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.290 [2024-11-20 08:31:58.008035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.290 [2024-11-20 
08:31:58.008041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.021196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.021214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.021221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.034726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.034743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.034750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.047344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.047361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.047367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.058370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.058387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3937 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.058394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.071624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.071641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.071648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.084579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.084595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.084602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.097375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.097391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.552 [2024-11-20 08:31:58.097397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.552 [2024-11-20 08:31:58.110204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.552 [2024-11-20 08:31:58.110221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.110227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.120908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.120924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.120931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.132510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.132536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.145774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.145791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.145797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.159377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.159394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.159400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.171989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.172006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.172012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.185709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.185726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.185733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.197569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.197585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.197591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.209670] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.209687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.209693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.220388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.220405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.220411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.232545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.232562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.232568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.246455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.246472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.246478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.260179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.260196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.260202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.553 [2024-11-20 08:31:58.274413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.553 [2024-11-20 08:31:58.274431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.553 [2024-11-20 08:31:58.274438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.816 [2024-11-20 08:31:58.284038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.816 [2024-11-20 08:31:58.284055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.816 [2024-11-20 08:31:58.284061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.816 [2024-11-20 08:31:58.298813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.816 [2024-11-20 08:31:58.298831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.816 [2024-11-20 08:31:58.298837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.816 [2024-11-20 08:31:58.311249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.816 [2024-11-20 08:31:58.311266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.816 [2024-11-20 08:31:58.311273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.816 20048.00 IOPS, 78.31 MiB/s [2024-11-20T07:31:58.545Z] [2024-11-20 08:31:58.323469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.816 [2024-11-20 08:31:58.323486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.816 [2024-11-20 08:31:58.323492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.336741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.336759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.350544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.350561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11048 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.350571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.363372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.363395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.374858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.374877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.387468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.387485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.387491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.399133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.399150] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.399156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.411577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.817 [2024-11-20 08:31:58.411594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.817 [2024-11-20 08:31:58.411600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.817 [2024-11-20 08:31:58.425600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.425618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.425624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.439263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.439279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.439285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.449018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 
08:31:58.449035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.449041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.463064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.463085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.463092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.474341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.474358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.474364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.487479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.487496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.487502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.500248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x240f3a0) 00:34:53.818 [2024-11-20 08:31:58.500265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.818 [2024-11-20 08:31:58.500271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.818 [2024-11-20 08:31:58.514073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.819 [2024-11-20 08:31:58.514090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.819 [2024-11-20 08:31:58.514096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.819 [2024-11-20 08:31:58.525953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.819 [2024-11-20 08:31:58.525970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.819 [2024-11-20 08:31:58.525976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.819 [2024-11-20 08:31:58.537413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:53.819 [2024-11-20 08:31:58.537430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.819 [2024-11-20 08:31:58.537437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.080 [2024-11-20 08:31:58.550547] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.550564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.563222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.563239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.563246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.575229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.575246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.575252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.587889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.587906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.587912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.601948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.601965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.601971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.613008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.613025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.613031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.625785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.625802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.625809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.638993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.639010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.639016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.651454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.651471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.651477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.663642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.663659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.663666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.674919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.674946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.689012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.689029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 
08:31:58.689036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.703018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.703035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.703042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.714533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.714550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.714556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.726595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.726612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.740038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.740054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21513 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.740061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.753451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.753468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.753475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.764815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.764838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.777115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.777132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.777138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.789950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.789967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.789973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.081 [2024-11-20 08:31:58.801518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.081 [2024-11-20 08:31:58.801535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.081 [2024-11-20 08:31:58.801541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.814038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.814055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.814062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.826922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.826938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.826945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.839915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.839932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.839938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.852087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.852103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.852110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.864495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.864512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.864518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.877933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.877950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.877956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.888378] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.888395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.888404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.901817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.901834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.901840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.915488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.915505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.915512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.928349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.928366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.928373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.940679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.940697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.940703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.952042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.952059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.952066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.966285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.966302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.966309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.977395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.977412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.977419] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:58.989499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:58.989517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:58.989524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.002180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:59.002211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.015049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.015066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:59.015072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.027929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.027946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 
08:31:59.027953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.040476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.040493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:59.040500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.052918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.052935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:59.052941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.344 [2024-11-20 08:31:59.064631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.344 [2024-11-20 08:31:59.064648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.344 [2024-11-20 08:31:59.064655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.609 [2024-11-20 08:31:59.078142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.609 [2024-11-20 08:31:59.078159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15476 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.609 [2024-11-20 08:31:59.078166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.609 [2024-11-20 08:31:59.088858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.609 [2024-11-20 08:31:59.088879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.610 [2024-11-20 08:31:59.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.610 [2024-11-20 08:31:59.101572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.610 [2024-11-20 08:31:59.101590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.610 [2024-11-20 08:31:59.101596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.610 [2024-11-20 08:31:59.115498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.610 [2024-11-20 08:31:59.115516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.610 [2024-11-20 08:31:59.115523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.610 [2024-11-20 08:31:59.128914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.610 [2024-11-20 08:31:59.128931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.610 [2024-11-20 08:31:59.128938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.610 [2024-11-20 08:31:59.141614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.611 [2024-11-20 08:31:59.141632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.611 [2024-11-20 08:31:59.141638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.611 [2024-11-20 08:31:59.151873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.611 [2024-11-20 08:31:59.151889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.611 [2024-11-20 08:31:59.151896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.611 [2024-11-20 08:31:59.164716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.611 [2024-11-20 08:31:59.164733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.611 [2024-11-20 08:31:59.164740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.611 [2024-11-20 08:31:59.179127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 
00:34:54.611 [2024-11-20 08:31:59.179145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.612 [2024-11-20 08:31:59.179151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.612 [2024-11-20 08:31:59.191578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.612 [2024-11-20 08:31:59.191595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.612 [2024-11-20 08:31:59.191602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.612 [2024-11-20 08:31:59.203021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.612 [2024-11-20 08:31:59.203038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.612 [2024-11-20 08:31:59.203045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.612 [2024-11-20 08:31:59.216937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.612 [2024-11-20 08:31:59.216956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.612 [2024-11-20 08:31:59.216963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.612 [2024-11-20 08:31:59.229295] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.612 [2024-11-20 08:31:59.229312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.613 [2024-11-20 08:31:59.229318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.613 [2024-11-20 08:31:59.240506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.613 [2024-11-20 08:31:59.240524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.613 [2024-11-20 08:31:59.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.613 [2024-11-20 08:31:59.253294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.613 [2024-11-20 08:31:59.253311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.613 [2024-11-20 08:31:59.253318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.613 [2024-11-20 08:31:59.266924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.613 [2024-11-20 08:31:59.266941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.614 [2024-11-20 08:31:59.266948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:54.614 [2024-11-20 08:31:59.278035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.614 [2024-11-20 08:31:59.278051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.614 [2024-11-20 08:31:59.278058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.614 [2024-11-20 08:31:59.289775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.614 [2024-11-20 08:31:59.289792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.614 [2024-11-20 08:31:59.289799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.614 [2024-11-20 08:31:59.303180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.615 [2024-11-20 08:31:59.303198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-11-20 08:31:59.303204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-11-20 08:31:59.315781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.615 [2024-11-20 08:31:59.315799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-11-20 08:31:59.315805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 20211.50 IOPS, 78.95 MiB/s [2024-11-20T07:31:59.344Z] [2024-11-20 08:31:59.325447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x240f3a0) 00:34:54.615 [2024-11-20 08:31:59.325464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-11-20 08:31:59.325470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 00:34:54.616 Latency(us) 00:34:54.616 [2024-11-20T07:31:59.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.616 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:54.616 nvme0n1 : 2.00 20234.16 79.04 0.00 0.00 6318.04 2293.76 17148.59 00:34:54.616 [2024-11-20T07:31:59.345Z] =================================================================================================================== 00:34:54.616 [2024-11-20T07:31:59.345Z] Total : 20234.16 79.04 0.00 0.00 6318.04 2293.76 17148.59 00:34:54.616 { 00:34:54.616 "results": [ 00:34:54.616 { 00:34:54.616 "job": "nvme0n1", 00:34:54.616 "core_mask": "0x2", 00:34:54.616 "workload": "randread", 00:34:54.616 "status": "finished", 00:34:54.616 "queue_depth": 128, 00:34:54.616 "io_size": 4096, 00:34:54.616 "runtime": 2.004086, 00:34:54.616 "iops": 20234.161607835194, 00:34:54.616 "mibps": 79.03969378060623, 00:34:54.616 "io_failed": 0, 00:34:54.616 "io_timeout": 0, 00:34:54.616 "avg_latency_us": 6318.035808734679, 00:34:54.616 "min_latency_us": 2293.76, 00:34:54.616 "max_latency_us": 17148.586666666666 00:34:54.616 } 00:34:54.616 ], 00:34:54.616 "core_count": 1 00:34:54.617 } 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:54.884 | .driver_specific 00:34:54.884 | .nvme_error 00:34:54.884 | .status_code 00:34:54.884 | .command_transient_transport_error' 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2197447 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2197447 ']' 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2197447 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2197447 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2197447' 00:34:54.884 killing process with pid 2197447 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@973 -- # kill 2197447 00:34:54.884 Received shutdown signal, test time was about 2.000000 seconds 00:34:54.884 00:34:54.884 Latency(us) 00:34:54.884 [2024-11-20T07:31:59.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.884 [2024-11-20T07:31:59.613Z] =================================================================================================================== 00:34:54.884 [2024-11-20T07:31:59.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:54.884 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2197447 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2198166 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2198166 /var/tmp/bperf.sock 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2198166 ']' 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.147 08:31:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.147 08:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.147 [2024-11-20 08:31:59.753114] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:55.147 [2024-11-20 08:31:59.753189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198166 ] 00:34:55.147 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:55.147 Zero copy mechanism will not be used. 
00:34:55.147 [2024-11-20 08:31:59.843210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.147 [2024-11-20 08:31:59.872851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.089 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.090 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.090 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.090 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.351 nvme0n1 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:56.351 08:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.351 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.351 Zero copy mechanism will not be used. 00:34:56.351 Running I/O for 2 seconds... 00:34:56.614 [2024-11-20 08:32:01.091121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.091156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.091166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.100423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.100448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.100456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.614 
[2024-11-20 08:32:01.104694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.104715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.104722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.109501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.109522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.117950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.117971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.117978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.122946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.122965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.122972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.127196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.127220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.127233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.135460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.135480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.135487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.140740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.140760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.140767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.145609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.145630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.145636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.150438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.150458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.150464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.155264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.155283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.155290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.159970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.159990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.164593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.164615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:56.614 [2024-11-20 08:32:01.164622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.169322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.614 [2024-11-20 08:32:01.169342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.614 [2024-11-20 08:32:01.169348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.614 [2024-11-20 08:32:01.175166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.175186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.175192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.179945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.179964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.179971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.184299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.184319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.184325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.188737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.188756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.188763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.193362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.193381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.193388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.197747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.197767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.197773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.202255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.202275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.202281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.212065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.212085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.212091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.217110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.217129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.217139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.222016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.222035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.222042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.227388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.227407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.227413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.235428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.235447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.235453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.239752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.239771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.239778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.244160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.244179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.244186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.249964] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.249983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.249989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.258415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.258436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.265459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.265480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.265486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.270399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.270422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.270428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:001b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.274801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.274820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.274826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.283488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.283508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.283514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.615 [2024-11-20 08:32:01.291756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.615 [2024-11-20 08:32:01.291776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.615 [2024-11-20 08:32:01.291783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.298583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.298602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.298609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.305625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.305648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.305655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.312167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.312186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.312192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.317452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.317471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.317478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.324274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.324294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 
08:32:01.324300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.329069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.329088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.329095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.333901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.333920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.333927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.616 [2024-11-20 08:32:01.338414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.616 [2024-11-20 08:32:01.338434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.616 [2024-11-20 08:32:01.338443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.344763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.344783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.344790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.352479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.352499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.352506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.359472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.359493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.359499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.364029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.364050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.364059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.369090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.369109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.369115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.373686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.373706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.373717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.377934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.377953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.377960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.382299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.382318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.382325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.386529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.386549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.386556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.394518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.394537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.878 [2024-11-20 08:32:01.394543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.878 [2024-11-20 08:32:01.401534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.878 [2024-11-20 08:32:01.401554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.401561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.879 [2024-11-20 08:32:01.406041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.879 [2024-11-20 08:32:01.406061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.406068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.879 [2024-11-20 08:32:01.413072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.879 [2024-11-20 08:32:01.413092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.413099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.879 [2024-11-20 08:32:01.419889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.879 [2024-11-20 08:32:01.419909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.419919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:56.879 [2024-11-20 08:32:01.426918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.879 [2024-11-20 08:32:01.426941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.426948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.879 [2024-11-20 08:32:01.435289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:56.879 [2024-11-20 08:32:01.435309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.879 [2024-11-20 08:32:01.435315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.443100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.443123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.443131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.450475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.450497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.450505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.457302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.457328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.465471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.465490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.465497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.472416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.472435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.472442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.479750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.479769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.479776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.485443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.485463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.485478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.491856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.491883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.491889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.497443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.497462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.497469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.503899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.503918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.503924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.510848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.510875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.510882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.520212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.520232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.520239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.527722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.527741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.535505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.535524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.535531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.541141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.541161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.541168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.547145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.547168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.547175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.552728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.552748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.552754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.558384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.558404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.558410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.567872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.567891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.567897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.575353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.575378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.581918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.879 [2024-11-20 08:32:01.581936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.879 [2024-11-20 08:32:01.581943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:56.879 [2024-11-20 08:32:01.591337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.880 [2024-11-20 08:32:01.591355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.880 [2024-11-20 08:32:01.591362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:56.880 [2024-11-20 08:32:01.597975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:56.880 [2024-11-20 08:32:01.597994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.880 [2024-11-20 08:32:01.598000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.604410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.604430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.611524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.611545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.611555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.621192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.621212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.621222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.631249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.631268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.631274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.638315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.638334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.638340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.645453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.645472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.645478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.653452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.653469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.653475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.661635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.661654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.661660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.668950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.668978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.676428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.676447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.141 [2024-11-20 08:32:01.676457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.141 [2024-11-20 08:32:01.685467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.141 [2024-11-20 08:32:01.685487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.685493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.691130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.691149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.691156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.695794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.695813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.695819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.704325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.704344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.709139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.709157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.709164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.713818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.713835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.713842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.718636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.718658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.718665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.729241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.729262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.729268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.734183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.734205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.734212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.738956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.738981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.747558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.747577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.747583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.752998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.753022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.755346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.755363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.755369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.759456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.759475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.759482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.763877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.763895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.763901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.768906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.768923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.768930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.774381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.774399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.774405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.781097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.781114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.781121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.788115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.788133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.788139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.794349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.794366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.794372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.799397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.799415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.799421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.804247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.804265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.804272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.808648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.808666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.808673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.812963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.812982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.812988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.817622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.817640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.817646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.824667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.824686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.824696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.831964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.142 [2024-11-20 08:32:01.831983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.142 [2024-11-20 08:32:01.831990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.142 [2024-11-20 08:32:01.840078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.143 [2024-11-20 08:32:01.840097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.143 [2024-11-20 08:32:01.840103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.143 [2024-11-20 08:32:01.847036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.143 [2024-11-20 08:32:01.847054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.143 [2024-11-20 08:32:01.847061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.143 [2024-11-20 08:32:01.853598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.143 [2024-11-20 08:32:01.853617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.143 [2024-11-20 08:32:01.853624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.143 [2024-11-20 08:32:01.861001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.143 [2024-11-20 08:32:01.861020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.143 [2024-11-20 08:32:01.861026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.868454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.868474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.868480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.875001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.875019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.875025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.883299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.883318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.883325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.890498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.890515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.890522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.895085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.895103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.895110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.899740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.899760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.908926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.908948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.908955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.913521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.913539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.913546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.918465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.918483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.927321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.927339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.927345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.933876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.933894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.933901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.938986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.939003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.939013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.947627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.947645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.947651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.952407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.952425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.952431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.957126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.957146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.957152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.961974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.961991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.961998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.967024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0)
00:34:57.406 [2024-11-20 08:32:01.967043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.406 [2024-11-20 08:32:01.967049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.406 [2024-11-20 08:32:01.975620]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:01.975639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:01.975646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:01.986102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:01.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:01.986127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:01.992302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:01.992321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:01.992327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:01.999029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:01.999052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:01.999058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b 
p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:02.005907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:02.005927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:02.005933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:02.013689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:02.013709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:02.013715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:02.019436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:02.019454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:02.019460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.406 [2024-11-20 08:32:02.024010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.406 [2024-11-20 08:32:02.024029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.406 [2024-11-20 08:32:02.024036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.028367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.028386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.028392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.032689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.032712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.032719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.036989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.037008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.037014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.041528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.041547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.041553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.045929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.045948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.045954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.050616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.050634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.050640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.055451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.055469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.055476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.060324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.060343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.060349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.065136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.065155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.065162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.069913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.069932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.069938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.075477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.075496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.075502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.407 4976.00 IOPS, 622.00 MiB/s [2024-11-20T07:32:02.136Z] [2024-11-20 08:32:02.081875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.081894] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.081901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.087789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.087810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.087819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.091967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.091986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.091992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.096374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.096403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.100789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.100807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.100813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.105343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.105368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.109628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.109646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.109654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.114059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.114078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.114084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.118356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.118376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.123884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.123903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.123910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.407 [2024-11-20 08:32:02.129036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.407 [2024-11-20 08:32:02.129056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.407 [2024-11-20 08:32:02.129062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.134946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.134965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.134972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.143073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.143092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.143099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.148513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.148532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.148538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.152995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.153013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.153020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.163155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.163174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.163180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.670 [2024-11-20 08:32:02.168015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.670 [2024-11-20 08:32:02.168035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.670 [2024-11-20 08:32:02.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.173967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.173986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.173993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.180404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.180421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.180434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.187468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.187488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 
08:32:02.187494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.194516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.194535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.194542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.201686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.201704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.201710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.207363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.207382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.207388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.212753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.212772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.212778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.219638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.219657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.219663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.226729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.226748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.226754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.231957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.231975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.231982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.238135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.238158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.238164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.242792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.242811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.242817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.247647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.247666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.247672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.251837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.251856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.251869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.255927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.255945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.255952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.263606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.263626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.263633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.270018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.270037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.270044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.274621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.274641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.274647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.279230] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.279249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.279256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.283648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.283666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.283673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.288448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.288467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.288474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.297881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.297899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.297905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.302723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.302742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.302749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.308051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.308070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.308077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.313372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.313390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.671 [2024-11-20 08:32:02.313396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.671 [2024-11-20 08:32:02.320169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.671 [2024-11-20 08:32:02.320188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.320195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.325512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.325530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.325536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.332176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.332194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.338630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.338649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.338655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.344100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.344119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.344125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.350290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.350308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.350315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.358597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.358616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.358622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.366637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.366656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.366663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.374363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.374383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.374389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.381207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.381227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.381234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.385559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.385578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.385585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.389770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.389792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.389798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.672 [2024-11-20 08:32:02.394012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.672 [2024-11-20 08:32:02.394030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.672 [2024-11-20 08:32:02.394036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.398694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.398714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.398721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.403315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.403335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.403341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.412072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.412092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.412098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.416896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.416915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.416922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.421381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.421400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.421406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.425829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.425847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.425854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.429803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.429822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.429829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.438493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.438513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.438519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.443216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.443235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.443241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.447676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.447696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.447702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.453083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.453102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.453109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.458438] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.458455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.458462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.466990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.467012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.467019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.471756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.471775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.471782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.476650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.476669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.476675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b 
p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.481895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.481922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.481935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.488208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.488226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.488233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.494841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.494869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.494876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.500829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.500849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.500856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.505497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.505517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.505524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.510223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.510243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.510249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.514904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.514923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.936 [2024-11-20 08:32:02.514930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.936 [2024-11-20 08:32:02.522618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.936 [2024-11-20 08:32:02.522638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.522645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.528179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.528198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.528204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.533188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.533207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.533214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.537753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.537772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.537779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.542202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.542223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.542229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.546673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.546693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.546699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.552768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.552788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.552794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.559993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.560015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.560022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.567116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.567135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.567141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.571488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.571507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.571514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.575805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.575825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.575834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.580025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.580045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.580053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.588106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.588126] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.588136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.594305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.594325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.594332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.600537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.600556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.600563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.605188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.605207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.605214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.609514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.609533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.609539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.613703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.613722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.613728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.617936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.617955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.622264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.622290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.622296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.626652] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.626673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.626682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.631123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.631142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.631148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.641206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.641226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.641233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.646014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.646034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.646040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b 
p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.650467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.650486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.650495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.937 [2024-11-20 08:32:02.658936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:57.937 [2024-11-20 08:32:02.658955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.937 [2024-11-20 08:32:02.658961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.200 [2024-11-20 08:32:02.662971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.200 [2024-11-20 08:32:02.662990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.200 [2024-11-20 08:32:02.662997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.200 [2024-11-20 08:32:02.669910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.200 [2024-11-20 08:32:02.669929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.200 [2024-11-20 08:32:02.669936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.200 [2024-11-20 08:32:02.675912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.200 [2024-11-20 08:32:02.675932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.200 [2024-11-20 08:32:02.675938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.200 [2024-11-20 08:32:02.681522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.681541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.681548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.686041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.686060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.686066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.690653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.690672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.690679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.695344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.695363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.695370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.699853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.699883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.699891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.704420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.704440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.704446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.708834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.708853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.708860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.713403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.713421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.713431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.718012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.718031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.718037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.722352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.722372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.722378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.726835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.726854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.726866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.731234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.731253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.731259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.739720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.739739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.739746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.746460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.746480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.746486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.751719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.751738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.751745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.759410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.759435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.766389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.766412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.766419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.771684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.771703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.771709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.779275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.779298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.779305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.787658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.787677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.787683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.795579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.795598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.795605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.800559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.800578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.800584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.805250] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.805270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.805277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.809807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.809827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.809833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.814442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.814461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.814468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.819007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.201 [2024-11-20 08:32:02.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.201 [2024-11-20 08:32:02.819033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:34:58.201 [2024-11-20 08:32:02.824070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.824092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.824099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.832845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.832871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.832878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.837623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.837642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.837652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.842917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.842937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.842943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.850290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.850310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.850317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.857081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.857101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.857107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.865045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.865068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.865075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.870601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.870621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.870630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.879316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.879336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.879342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.885247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.885266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.885273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.892478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.892499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.892505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.899907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.899927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.899935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.907734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.907754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.907760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.914870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.914891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.914897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.919269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.919288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.202 [2024-11-20 08:32:02.923913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.202 [2024-11-20 08:32:02.923932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.202 [2024-11-20 08:32:02.923941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.928816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.928835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.928841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.935554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.935573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.935579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.942401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.942420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.942426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.949852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 
08:32:02.949875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.949881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.956501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.956521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.956527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.962387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.962407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.962413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.968066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.968087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.968093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.973605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.973624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.973630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.979306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.979325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.979335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.985632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.985651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.985657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:02.990208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:02.990226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:02.990233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:03.000519] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:03.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:03.000545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:03.006677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:03.006695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:03.006702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:03.013170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:03.013189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.465 [2024-11-20 08:32:03.013198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.465 [2024-11-20 08:32:03.020214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.465 [2024-11-20 08:32:03.020234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.020240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.030659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.030679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.030685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.037534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.037554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.037560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.044983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.045005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.050246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.050263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.050270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.056498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.056518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.062695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.062718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.062726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.070036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.070065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.466 [2024-11-20 08:32:03.076406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.076426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.076432] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.466 5140.50 IOPS, 642.56 MiB/s [2024-11-20T07:32:03.195Z] [2024-11-20 08:32:03.082528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c28a0) 00:34:58.466 [2024-11-20 08:32:03.082548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.466 [2024-11-20 08:32:03.082554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.466 00:34:58.466 Latency(us) 00:34:58.466 [2024-11-20T07:32:03.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.466 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:58.466 nvme0n1 : 2.00 5142.12 642.77 0.00 0.00 3108.74 512.00 12888.75 00:34:58.466 [2024-11-20T07:32:03.195Z] =================================================================================================================== 00:34:58.466 [2024-11-20T07:32:03.195Z] Total : 5142.12 642.77 0.00 0.00 3108.74 512.00 12888.75 00:34:58.466 { 00:34:58.466 "results": [ 00:34:58.466 { 00:34:58.466 "job": "nvme0n1", 00:34:58.466 "core_mask": "0x2", 00:34:58.466 "workload": "randread", 00:34:58.466 "status": "finished", 00:34:58.466 "queue_depth": 16, 00:34:58.466 "io_size": 131072, 00:34:58.466 "runtime": 2.002481, 00:34:58.466 "iops": 5142.121198653071, 00:34:58.466 "mibps": 642.7651498316338, 00:34:58.466 "io_failed": 0, 00:34:58.466 "io_timeout": 0, 00:34:58.466 "avg_latency_us": 3108.735680942669, 00:34:58.466 "min_latency_us": 512.0, 00:34:58.466 "max_latency_us": 12888.746666666666 00:34:58.466 } 00:34:58.466 ], 00:34:58.466 "core_count": 1 00:34:58.466 } 00:34:58.466 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:58.466 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:58.466 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:58.466 | .driver_specific 00:34:58.466 | .nvme_error 00:34:58.466 | .status_code 00:34:58.466 | .command_transient_transport_error' 00:34:58.466 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 332 > 0 )) 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2198166 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2198166 ']' 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2198166 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198166 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198166' 00:34:58.729 killing process with pid 2198166 00:34:58.729 
08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2198166 00:34:58.729 Received shutdown signal, test time was about 2.000000 seconds 00:34:58.729 00:34:58.729 Latency(us) 00:34:58.729 [2024-11-20T07:32:03.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.729 [2024-11-20T07:32:03.458Z] =================================================================================================================== 00:34:58.729 [2024-11-20T07:32:03.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2198166 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2198850 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2198850 /var/tmp/bperf.sock 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2198850 ']' 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:58.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:58.729 08:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.990 [2024-11-20 08:32:03.459548] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:34:58.990 [2024-11-20 08:32:03.459606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198850 ] 00:34:58.990 [2024-11-20 08:32:03.548439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.990 [2024-11-20 08:32:03.577165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.561 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.561 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:59.561 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:59.561 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:59.822 
08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:59.822 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.822 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.822 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.822 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.822 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.084 nvme0n1 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:00.346 08:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.346 Running I/O for 2 seconds... 
00:35:00.346 [2024-11-20 08:32:04.947394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ee5c8 00:35:00.346 [2024-11-20 08:32:04.949267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:04.949294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:04.957833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.346 [2024-11-20 08:32:04.959076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:04.959094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:04.969824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.346 [2024-11-20 08:32:04.971056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:04.971073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:04.981782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.346 [2024-11-20 08:32:04.982988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:04.983004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:04.992936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ecc78 00:35:00.346 [2024-11-20 08:32:04.994108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:04.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.005688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ebb98 00:35:00.346 [2024-11-20 08:32:05.006877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.006893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.017670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.346 [2024-11-20 08:32:05.018858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.018878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.031243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9b30 00:35:00.346 [2024-11-20 08:32:05.033085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.033101] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.041605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.346 [2024-11-20 08:32:05.042795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.042811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.053559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.346 [2024-11-20 08:32:05.054747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.346 [2024-11-20 08:32:05.065483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.346 [2024-11-20 08:32:05.066632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.346 [2024-11-20 08:32:05.066649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.609 [2024-11-20 08:32:05.077424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.609 [2024-11-20 08:32:05.078601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.609 [2024-11-20 08:32:05.078616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.609 [2024-11-20 08:32:05.089329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.609 [2024-11-20 08:32:05.090518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.609 [2024-11-20 08:32:05.090533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.609 [2024-11-20 08:32:05.101235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.609 [2024-11-20 08:32:05.102405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.609 [2024-11-20 08:32:05.102420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.113173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.610 [2024-11-20 08:32:05.114346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.114362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.124258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.125424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 
[2024-11-20 08:32:05.125440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.136940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.138103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.138118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.148843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.150021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.160755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.161935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.161953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.172658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.173834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22631 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.173850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.184544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.185723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.185739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.196466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.197640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.197656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.208377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa3a0 00:35:00.610 [2024-11-20 08:32:05.209543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.209559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.220280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:00.610 [2024-11-20 08:32:05.221470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.221486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.231411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fa7d8 00:35:00.610 [2024-11-20 08:32:05.232558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.232573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.244110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f96f8 00:35:00.610 [2024-11-20 08:32:05.245265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.245281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.256043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8618 00:35:00.610 [2024-11-20 08:32:05.257190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.257206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.267958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:00.610 [2024-11-20 08:32:05.269104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.269121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.279866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f81e0 00:35:00.610 [2024-11-20 08:32:05.281024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.281040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.293359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f92c0 00:35:00.610 [2024-11-20 08:32:05.295167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.295182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.303710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e99d8 00:35:00.610 [2024-11-20 08:32:05.304881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.304897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.315659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e88f8 00:35:00.610 
[2024-11-20 08:32:05.316847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.316864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.610 [2024-11-20 08:32:05.329076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e7818 00:35:00.610 [2024-11-20 08:32:05.330867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.610 [2024-11-20 08:32:05.330882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.339414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6fa8 00:35:00.873 [2024-11-20 08:32:05.340585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.340601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.350535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9f68 00:35:00.873 [2024-11-20 08:32:05.351671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.351686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.363198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) 
with pdu=0x2000166f9f68 00:35:00.873 [2024-11-20 08:32:05.364341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.364357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.375093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9f68 00:35:00.873 [2024-11-20 08:32:05.376235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.376250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.386973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9f68 00:35:00.873 [2024-11-20 08:32:05.388095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.388110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.398869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9f68 00:35:00.873 [2024-11-20 08:32:05.400021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.400037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.410755] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e8088 00:35:00.873 [2024-11-20 08:32:05.411904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.411921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.422692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e9168 00:35:00.873 [2024-11-20 08:32:05.423838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.873 [2024-11-20 08:32:05.423854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.873 [2024-11-20 08:32:05.434604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8a50 00:35:00.873 [2024-11-20 08:32:05.435723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.435739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.446568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e73e0 00:35:00.874 [2024-11-20 08:32:05.447726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.447742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 
08:32:05.458486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.874 [2024-11-20 08:32:05.459613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.459628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.470390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.874 [2024-11-20 08:32:05.471524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.471543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.482304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.874 [2024-11-20 08:32:05.483438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.483454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.494182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e95a0 00:35:00.874 [2024-11-20 08:32:05.495333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.495348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:0051 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.505340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8a50 00:35:00.874 [2024-11-20 08:32:05.506461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.506476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.518041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e7818 00:35:00.874 [2024-11-20 08:32:05.519177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.519193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.531493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.874 [2024-11-20 08:32:05.533247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.533262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.541876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:00.874 [2024-11-20 08:32:05.543014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.543030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.555315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f92c0 00:35:00.874 [2024-11-20 08:32:05.557089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.557104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.565711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e7818 00:35:00.874 [2024-11-20 08:32:05.566839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.566855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.579155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fb048 00:35:00.874 [2024-11-20 08:32:05.580918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.580933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:00.874 [2024-11-20 08:32:05.588750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8a50 00:35:00.874 [2024-11-20 08:32:05.589863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.874 [2024-11-20 08:32:05.589878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.603039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9b30 00:35:01.136 [2024-11-20 08:32:05.604796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.604811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.613822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e1b48 00:35:01.136 [2024-11-20 08:32:05.615107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.615123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.627454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e99d8 00:35:01.136 [2024-11-20 08:32:05.629373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.629388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.637038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ee5c8 00:35:01.136 [2024-11-20 08:32:05.638312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 
[2024-11-20 08:32:05.638327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.649752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e1b48 00:35:01.136 [2024-11-20 08:32:05.651050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.651065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.660880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea248 00:35:01.136 [2024-11-20 08:32:05.662157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.662173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.673584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc128 00:35:01.136 [2024-11-20 08:32:05.674860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.674880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.687074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fd208 00:35:01.136 [2024-11-20 08:32:05.688994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7470 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.689009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.696666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eff18 00:35:01.136 [2024-11-20 08:32:05.697946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.697961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.709403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e23b8 00:35:01.136 [2024-11-20 08:32:05.710704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.721350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.136 [2024-11-20 08:32:05.722720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.722736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.734900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fd208 00:35:01.136 [2024-11-20 08:32:05.736818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.736833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.745253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5658 00:35:01.136 [2024-11-20 08:32:05.746535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.746550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.757140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5658 00:35:01.136 [2024-11-20 08:32:05.758421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.758437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.769031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5658 00:35:01.136 [2024-11-20 08:32:05.770308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.770323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.780911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5658 00:35:01.136 [2024-11-20 08:32:05.782207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.782225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.792851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5658 00:35:01.136 [2024-11-20 08:32:05.794140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.794155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.804147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ef270 00:35:01.136 [2024-11-20 08:32:05.805414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.805429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.816846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3d08 00:35:01.136 [2024-11-20 08:32:05.818104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.827997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc560 00:35:01.136 
[2024-11-20 08:32:05.829259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.829274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.840699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fd640 00:35:01.136 [2024-11-20 08:32:05.841950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.841966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.136 [2024-11-20 08:32:05.852614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fe720 00:35:01.136 [2024-11-20 08:32:05.853879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.136 [2024-11-20 08:32:05.853895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.864552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5ec8 00:35:01.398 [2024-11-20 08:32:05.865859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.875688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fcd9c0) with pdu=0x2000166ebb98 00:35:01.398 [2024-11-20 08:32:05.876918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.876933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.888409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3d08 00:35:01.398 [2024-11-20 08:32:05.889698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.901891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fda78 00:35:01.398 [2024-11-20 08:32:05.903763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.903777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.911477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ebb98 00:35:01.398 [2024-11-20 08:32:05.912740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.912755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.926366] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e9e10 00:35:01.398 21149.00 IOPS, 82.61 MiB/s [2024-11-20T07:32:06.127Z] [2024-11-20 08:32:05.928439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.398 [2024-11-20 08:32:05.928452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:01.398 [2024-11-20 08:32:05.935973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f57b0 00:35:01.398 [2024-11-20 08:32:05.937410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.937425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:05.948684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f4298 00:35:01.399 [2024-11-20 08:32:05.950146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.950161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:05.960582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f20d8 00:35:01.399 [2024-11-20 08:32:05.962025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.962040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:05.971710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:05.973137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.973152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:05.984452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fbcf0 00:35:01.399 [2024-11-20 08:32:05.985912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.985928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:05.995594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f57b0 00:35:01.399 [2024-11-20 08:32:05.997022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:05.997037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.008295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f20d8 00:35:01.399 [2024-11-20 08:32:06.009743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.009759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.019417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.020836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.020851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.032076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.033499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.033514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.044054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.045484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.045499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.055980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.057406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 
[2024-11-20 08:32:06.057421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.067895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.069333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.069349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.079787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc128 00:35:01.399 [2024-11-20 08:32:06.081240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.081255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.090946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f57b0 00:35:01.399 [2024-11-20 08:32:06.092355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.092373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.103657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e4578 00:35:01.399 [2024-11-20 08:32:06.105111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8366 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.105127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.399 [2024-11-20 08:32:06.115585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.399 [2024-11-20 08:32:06.116999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.399 [2024-11-20 08:32:06.117015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.127525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc998 00:35:01.661 [2024-11-20 08:32:06.128936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.128952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.138677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de038 00:35:01.661 [2024-11-20 08:32:06.140093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.140108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.152956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f4f40 00:35:01.661 [2024-11-20 08:32:06.155017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:107 nsid:1 lba:19128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.155032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.162552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaef0 00:35:01.661 [2024-11-20 08:32:06.163934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.163950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.176849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e9e10 00:35:01.661 [2024-11-20 08:32:06.178913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.187259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f7100 00:35:01.661 [2024-11-20 08:32:06.188689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.188704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.198433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de038 00:35:01.661 [2024-11-20 08:32:06.199847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.199865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.211113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de038 00:35:01.661 [2024-11-20 08:32:06.212531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.212548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.223031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de038 00:35:01.661 [2024-11-20 08:32:06.224456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.224472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.234127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f6020 00:35:01.661 [2024-11-20 08:32:06.235526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.235542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.248449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with 
pdu=0x2000166f7100 00:35:01.661 [2024-11-20 08:32:06.250488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.250504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.258046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8618 00:35:01.661 [2024-11-20 08:32:06.259408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.259423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.270759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de038 00:35:01.661 [2024-11-20 08:32:06.272191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.272207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.282671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f7100 00:35:01.661 [2024-11-20 08:32:06.284076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.284091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.294591] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f8a50 00:35:01.661 [2024-11-20 08:32:06.296005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.296021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.306527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:01.661 [2024-11-20 08:32:06.307954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.307970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.661 [2024-11-20 08:32:06.318487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eb760 00:35:01.661 [2024-11-20 08:32:06.319925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.661 [2024-11-20 08:32:06.319941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.662 [2024-11-20 08:32:06.329629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.662 [2024-11-20 08:32:06.331053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-11-20 08:32:06.331068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.662 [2024-11-20 
08:32:06.342311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.662 [2024-11-20 08:32:06.343725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-11-20 08:32:06.343741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.662 [2024-11-20 08:32:06.354224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.662 [2024-11-20 08:32:06.355639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-11-20 08:32:06.355655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.662 [2024-11-20 08:32:06.366232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.662 [2024-11-20 08:32:06.367646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-11-20 08:32:06.367661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.662 [2024-11-20 08:32:06.378146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.662 [2024-11-20 08:32:06.379553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-11-20 08:32:06.379569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.390088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.391499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.401995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.403408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.403426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.413911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.415323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.415339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.425809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.427227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.427244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.437755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.439171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.439187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.449691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.451085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.451101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.461620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.463071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.463086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.475056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e3498 00:35:01.925 [2024-11-20 08:32:06.477101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.477117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.485453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f5be8 00:35:01.925 [2024-11-20 08:32:06.486884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-11-20 08:32:06.486901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.925 [2024-11-20 08:32:06.497458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ddc00 00:35:01.925 [2024-11-20 08:32:06.498871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.498887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.509431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300 00:35:01.926 [2024-11-20 08:32:06.510844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.510861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.522939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f3e60 00:35:01.926 [2024-11-20 08:32:06.524980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:01.926 [2024-11-20 08:32:06.524995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.532543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e2c28 00:35:01.926 [2024-11-20 08:32:06.533942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.533957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.545274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300 00:35:01.926 [2024-11-20 08:32:06.546688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.546704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.558752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f3e60 00:35:01.926 [2024-11-20 08:32:06.560793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.560809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.568388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e2c28 00:35:01.926 [2024-11-20 08:32:06.569785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19023 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.569801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.580278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc128 00:35:01.926 [2024-11-20 08:32:06.581660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.581675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.593008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e0ea0 00:35:01.926 [2024-11-20 08:32:06.594414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.594430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.606483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300 00:35:01.926 [2024-11-20 08:32:06.608525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.608541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.616915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e2c28 00:35:01.926 [2024-11-20 08:32:06.618324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.618340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.628096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc128 00:35:01.926 [2024-11-20 08:32:06.629476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.629491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.926 [2024-11-20 08:32:06.640850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f3e60 00:35:01.926 [2024-11-20 08:32:06.642269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-11-20 08:32:06.642284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.652830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300 00:35:02.188 [2024-11-20 08:32:06.654219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.654235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.664774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e23b8 00:35:02.188 
[2024-11-20 08:32:06.666173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.666189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.675916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eaab8 00:35:02.188 [2024-11-20 08:32:06.677297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.677312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.686717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eee38 00:35:02.188 [2024-11-20 08:32:06.687635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.687650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.700397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de470 00:35:02.188 [2024-11-20 08:32:06.701919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.701935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.710004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fcd9c0) with pdu=0x2000166f9b30 00:35:02.188 [2024-11-20 08:32:06.710909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.710924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.724225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f9b30 00:35:02.188 [2024-11-20 08:32:06.725719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.725734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.733822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ea680 00:35:02.188 [2024-11-20 08:32:06.734721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.748112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e5220 00:35:02.188 [2024-11-20 08:32:06.749651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.749667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.757714] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166fc998 00:35:02.188 [2024-11-20 08:32:06.758609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.758624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.770449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eea00 00:35:02.188 [2024-11-20 08:32:06.771380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.771397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.781636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166de470 00:35:02.188 [2024-11-20 08:32:06.782486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.782502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.794342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f0ff8 00:35:02.188 [2024-11-20 08:32:06.795255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.795270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:35:02.188 [2024-11-20 08:32:06.808009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e8d30 00:35:02.188 [2024-11-20 08:32:06.809549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.819386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e8088 00:35:02.188 [2024-11-20 08:32:06.820581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.820600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.830724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6fa8 00:35:02.188 [2024-11-20 08:32:06.831856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.831874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.843821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e12d8 00:35:02.188 [2024-11-20 08:32:06.845143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.845158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.855735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300 00:35:02.188 [2024-11-20 08:32:06.857078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.857094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.867651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166ddc00 00:35:02.188 [2024-11-20 08:32:06.868990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.869006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.879626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f7100 00:35:02.188 [2024-11-20 08:32:06.881000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.881016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:02.188 [2024-11-20 08:32:06.890801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166eff18 00:35:02.188 [2024-11-20 08:32:06.892145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-11-20 08:32:06.892160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:35:02.188 [2024-11-20 08:32:06.903521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166f0350
00:35:02.188 [2024-11-20 08:32:06.904861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:02.188 [2024-11-20 08:32:06.904879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:35:02.450 [2024-11-20 08:32:06.915454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e1b48
00:35:02.450 [2024-11-20 08:32:06.916813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:02.450 [2024-11-20 08:32:06.916829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:02.450 [2024-11-20 08:32:06.926596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcd9c0) with pdu=0x2000166e6300
00:35:02.450 [2024-11-20 08:32:06.927926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:02.450 [2024-11-20 08:32:06.927942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:35:02.450 21279.50 IOPS, 83.12 MiB/s
00:35:02.450 Latency(us)
00:35:02.450 [2024-11-20T07:32:07.179Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:02.450 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:02.450 nvme0n1             : 2.01   21299.71      83.20       0.00       0.00    6001.43    2116.27   17694.72
00:35:02.450 [2024-11-20T07:32:07.179Z] ===================================================================================================================
00:35:02.450 [2024-11-20T07:32:07.179Z] Total               :   21299.71      83.20       0.00       0.00    6001.43    2116.27   17694.72
00:35:02.450 {
00:35:02.450   "results": [
00:35:02.450     {
00:35:02.450       "job": "nvme0n1",
00:35:02.450       "core_mask": "0x2",
00:35:02.450       "workload": "randwrite",
00:35:02.450       "status": "finished",
00:35:02.450       "queue_depth": 128,
00:35:02.450       "io_size": 4096,
00:35:02.450       "runtime": 2.007868,
00:35:02.450       "iops": 21299.706952847497,
00:35:02.450       "mibps": 83.20198028456053,
00:35:02.450       "io_failed": 0,
00:35:02.450       "io_timeout": 0,
00:35:02.450       "avg_latency_us": 6001.429503433333,
00:35:02.450       "min_latency_us": 2116.266666666667,
00:35:02.450       "max_latency_us": 17694.72
00:35:02.450     }
00:35:02.450   ],
00:35:02.450   "core_count": 1
00:35:02.450 }
00:35:02.450 08:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:02.450 08:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:02.450 08:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:02.450 | .driver_specific
00:35:02.450 | .nvme_error
00:35:02.450 | .status_code
00:35:02.450 | .command_transient_transport_error'
00:35:02.450 08:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2198850
00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2198850 ']'
00:35:02.450 08:32:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2198850 00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.450 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198850 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198850' 00:35:02.711 killing process with pid 2198850 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2198850 00:35:02.711 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.711 00:35:02.711 Latency(us) 00:35:02.711 [2024-11-20T07:32:07.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.711 [2024-11-20T07:32:07.440Z] =================================================================================================================== 00:35:02.711 [2024-11-20T07:32:07.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2198850 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# rw=randwrite 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2199536 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2199536 /var/tmp/bperf.sock 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2199536 ']' 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.711 08:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.711 [2024-11-20 08:32:07.366277] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:35:02.711 [2024-11-20 08:32:07.366337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199536 ] 00:35:02.711 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:02.711 Zero copy mechanism will not be used. 00:35:02.972 [2024-11-20 08:32:07.456780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.972 [2024-11-20 08:32:07.486248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.541 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.541 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:03.541 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:03.541 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.802 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.064 nvme0n1 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:04.064 08:32:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.064 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.064 Zero copy mechanism will not be used. 00:35:04.064 Running I/O for 2 seconds... 
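The repeated `data_crc32_calc_done: *ERROR*: Data digest error` lines that follow come from the CRC-32C check NVMe/TCP applies to each data PDU once the controller is attached with `--ddgst`; the earlier `accel_error_inject_error -o crc32c -t corrupt` call makes every computed digest wrong, so each WRITE completes with a transient transport error, which is exactly what this error-path test expects. As a rough illustration only (SPDK itself uses an accelerated crc32c implementation, not this), the digest algorithm can be sketched in pure Python:

```python
# CRC-32C (Castagnoli) — the checksum NVMe/TCP uses for header (HDGST) and
# data (DDGST) digests. Table-driven sketch over the reflected polynomial
# 0x82F63B78; illustrative only, not SPDK's implementation.

def _make_table():
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            # Shift right one bit, XOR in the polynomial when the LSB is set.
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def crc32c(data: bytes, crc: int = 0) -> int:
    """Compute CRC-32C of `data`, with the usual init/final XOR of 0xFFFFFFFF."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF
```

The standard check value for this CRC is `crc32c(b"123456789") == 0xE3069283`; a receiver that computes a different value over a data PDU reports exactly the digest error seen in the log below.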
00:35:04.064 [2024-11-20 08:32:08.706104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.706342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.706366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.715660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.715870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.715889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.720276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.720539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.720555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.728853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.729113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.729131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.735678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.735927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.735943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.743191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.743457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.743473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.751478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.751574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.757118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.757173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.757188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.763513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.763786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.763801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.772490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.772547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.772562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.778883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.778956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.064 [2024-11-20 08:32:08.778971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.064 [2024-11-20 08:32:08.787894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.064 [2024-11-20 08:32:08.788152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.064 [2024-11-20 08:32:08.788168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.794281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.794371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.794387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.800067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.800123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.800139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.804125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.804190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.804205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.807963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.808023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.808039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.812400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.812453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.812469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.816334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.816384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.816399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.821674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.821735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.821751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.825723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.825779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.825794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.829874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.829934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.327 [2024-11-20 08:32:08.829949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.327 [2024-11-20 08:32:08.834740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.327 [2024-11-20 08:32:08.834800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.834815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.841471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.841741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.841758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.846993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:04.328 [2024-11-20 08:32:08.847101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.847116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.855089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.855142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.855157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.859412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.859479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.859494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.863571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.863633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.863648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.867716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.867776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.867792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.874526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.874582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.874597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.883709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.883973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.883988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.889019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.889080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.889095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.893656] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.893927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.899904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.899957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.899975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.904382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.904443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.904458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.908580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.908635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.908650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:04.328 [2024-11-20 08:32:08.912620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.912711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.921397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.921456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.921471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.927006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.927081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.930947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.931005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.931020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.934829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.934886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.934902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.942390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.942674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.942691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.948695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.948754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.948769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.952798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.952865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.952881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.959138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.328 [2024-11-20 08:32:08.959410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.328 [2024-11-20 08:32:08.959425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.328 [2024-11-20 08:32:08.963253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.963438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.963453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.966952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.967129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.967145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.970857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.971057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.329 [2024-11-20 08:32:08.971073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.974885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.975073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.975088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.978929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.979119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.979134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.983081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.983267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.983283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.987625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.987830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.987846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.992010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.992203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.992219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.995609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.995810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.995825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:08.999447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:08.999641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:08.999656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.005247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.005437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.005453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.009194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.009365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.009380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.013080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.013251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.013267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.021534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.021743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.021758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.028429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:04.329 [2024-11-20 08:32:09.028605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.028623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.036097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.036414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.036430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.042905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.043077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.043093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.046792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.046978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.046994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.329 [2024-11-20 08:32:09.050673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.329 [2024-11-20 08:32:09.050872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.329 [2024-11-20 08:32:09.050887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.592 [2024-11-20 08:32:09.056331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.592 [2024-11-20 08:32:09.056621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.592 [2024-11-20 08:32:09.056637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.592 [2024-11-20 08:32:09.065258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.592 [2024-11-20 08:32:09.065504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.592 [2024-11-20 08:32:09.065520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.592 [2024-11-20 08:32:09.070050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.592 [2024-11-20 08:32:09.070223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.592 [2024-11-20 08:32:09.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.592 [2024-11-20 08:32:09.075749] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.592 [2024-11-20 08:32:09.075929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.592 [2024-11-20 08:32:09.075944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.592 [2024-11-20 08:32:09.083483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.592 [2024-11-20 08:32:09.083636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.083651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.087141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.087290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.087305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.093305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.093444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:04.593 [2024-11-20 08:32:09.096945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.097123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.097138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.104627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.104930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.110425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.110620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.110635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.114244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.114406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.114421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.117877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.118073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.118088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.124872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.125158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.125174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.130541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.130711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.130727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.134717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.134876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.134891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.140169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.140367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.140382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.148441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.148854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.148875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.152337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.152500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.152515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.160590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.160794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.593 [2024-11-20 08:32:09.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.167514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.167813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.167829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.174910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.175200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.175216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.180562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.180732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.180749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.184554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.184928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.184944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.191004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.191261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.191276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.195963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.196133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.196148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.202455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.202622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.202637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.206409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.206582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.206597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.213775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.214146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.214162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.219685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.219887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.219903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.226421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.593 [2024-11-20 08:32:09.226602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.593 [2024-11-20 08:32:09.226618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.593 [2024-11-20 08:32:09.235944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:04.593 [2024-11-20 08:32:09.236186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.236202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.246088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.246352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.246369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.254163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.254349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.254365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.262547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.262857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.262878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.269305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.269472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.269488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.277313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.277530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.277546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.286290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.286461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.286478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.295729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.295996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.296013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.303029] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.303196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.303212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.594 [2024-11-20 08:32:09.310536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.594 [2024-11-20 08:32:09.310911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.594 [2024-11-20 08:32:09.310928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.855 [2024-11-20 08:32:09.319691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.320033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.855 [2024-11-20 08:32:09.320051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.855 [2024-11-20 08:32:09.328692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.328995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.855 [2024-11-20 08:32:09.329013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:04.855 [2024-11-20 08:32:09.337577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.337853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.855 [2024-11-20 08:32:09.337874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.855 [2024-11-20 08:32:09.346886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.347209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.855 [2024-11-20 08:32:09.347225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.855 [2024-11-20 08:32:09.356188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.356413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.855 [2024-11-20 08:32:09.356430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.855 [2024-11-20 08:32:09.365893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.855 [2024-11-20 08:32:09.366061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.366077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.376556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.376874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.376891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.387422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.387645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.387665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.396264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.396583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.396600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.404332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.404702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.404720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.411608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.411775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.411791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.420104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.420497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.420514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.429062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.429343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.429360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.437979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.438251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.856 [2024-11-20 08:32:09.438268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.447772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.448071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.448088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.456924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.457325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.457342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.468449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.468674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.468690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.478804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.479084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.479101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.488932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.489187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.489203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.499431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.499627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.499643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.510656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.510857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.510878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.521471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.521672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.521688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.531977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.532249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.532266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.542705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.542895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.542911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.553462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.553813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.553830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.564791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:04.856 [2024-11-20 08:32:09.565054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.565071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:04.856 [2024-11-20 08:32:09.575554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:04.856 [2024-11-20 08:32:09.575774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.856 [2024-11-20 08:32:09.575790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.587058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.587389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.587406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.597754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.598056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.598073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.607543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.607721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.607736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.617466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.617749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.626002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.626306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.626323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.632641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.632836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.632852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.640059] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.640226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.640245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.647150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.647456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.647473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.653829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.654018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.654036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.663528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.663696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.663712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:05.118 [2024-11-20 08:32:09.670629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.670797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.670813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.675494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.675661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.675677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.681794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.682067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.682090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.689199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.689488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.689505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.698458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.698732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.698749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.118 4487.00 IOPS, 560.88 MiB/s [2024-11-20T07:32:09.847Z] [2024-11-20 08:32:09.709125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.709422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.709439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.718743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.718991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 08:32:09.719007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.729232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.729472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.118 [2024-11-20 
08:32:09.729487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.118 [2024-11-20 08:32:09.739639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.118 [2024-11-20 08:32:09.739888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.739904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.749935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.750174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.750190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.759874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.760125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.760142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.770337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.770550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.770566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.779442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.779911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.779928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.787925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.788182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.788198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.798651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.798904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.798920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.808419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.808699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.808717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.818766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.819043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.829377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.829649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.829666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.119 [2024-11-20 08:32:09.839820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.119 [2024-11-20 08:32:09.840059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.119 [2024-11-20 08:32:09.840075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.850394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.850673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.850690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.861084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.861331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.861347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.871101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.871382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.881296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.881549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.881568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.892265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:05.382 [2024-11-20 08:32:09.892508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.892524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.902644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.902921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.902937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.913407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.913625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.913642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.923995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.924253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.924269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.934329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.934561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.934577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.945051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.945279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.945295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.955764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.956016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.956032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.966472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.966780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.966797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.976240] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.976393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.976409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.986000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.986270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.986286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:09.996197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:09.996424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:09.996440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:10.007162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:10.007327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:10.007345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:05.382 [2024-11-20 08:32:10.017797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:10.018055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:10.018071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:10.028624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:10.028967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:10.028984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.382 [2024-11-20 08:32:10.038969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.382 [2024-11-20 08:32:10.039261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.382 [2024-11-20 08:32:10.039278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.045743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.045902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.045920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.049289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.049439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.049454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.052712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.052869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.052885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.056156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.056307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.059488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.059639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.059655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.063249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.063403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.063419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.066680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.066833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.066849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.070074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.070293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.070309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.073988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.074381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.383 [2024-11-20 08:32:10.074398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.083689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.083973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.083990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.093811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.094120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.094142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.383 [2024-11-20 08:32:10.104406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.383 [2024-11-20 08:32:10.104657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.383 [2024-11-20 08:32:10.104673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.115240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.115494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.125149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.125417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.125433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.132177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.132304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.132320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.139284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.139494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.139509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.146054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.146112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.146127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.153512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.153661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.153676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.160455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.160529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.160544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.646 [2024-11-20 08:32:10.166761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.646 [2024-11-20 08:32:10.166826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.646 [2024-11-20 08:32:10.166841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.174249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:05.647 [2024-11-20 08:32:10.174309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.174325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.180186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.180446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.180462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.187970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.188095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.188111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.195626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.195834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.195849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.202745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.202802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.202818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.208708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.208776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.208791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.215689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.215868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.221633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.221710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.221725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.228785] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.228840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.228857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.236550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.236814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.243753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.244015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.244031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.250174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.250604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.250620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:05.647 [2024-11-20 08:32:10.258099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.258272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.258286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.264959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.265033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.265049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.270001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.270084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.270099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.273894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.273967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.278329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.278570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.278587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.283569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.283795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.283810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.291986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.292211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.292226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.300391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.300696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.300712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.305998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.306062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.306077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.313296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.313350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.313365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.321114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.321282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.321297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.327165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.327372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.647 [2024-11-20 08:32:10.327387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.332495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.332574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.340818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.341035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.341050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.647 [2024-11-20 08:32:10.348156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.647 [2024-11-20 08:32:10.348229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.647 [2024-11-20 08:32:10.348245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.648 [2024-11-20 08:32:10.354998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.648 [2024-11-20 08:32:10.355073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.648 [2024-11-20 08:32:10.355088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.648 [2024-11-20 08:32:10.362123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.648 [2024-11-20 08:32:10.362190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.648 [2024-11-20 08:32:10.362206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.648 [2024-11-20 08:32:10.369499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.648 [2024-11-20 08:32:10.369638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.648 [2024-11-20 08:32:10.369653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.379142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.379218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.379232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.383873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.384248] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.384264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.392677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.392741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.392757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.398039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.398328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.406148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.406417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.406433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.413691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.413756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.413771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.421561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.421632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.421647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.427808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.427920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.427936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.435636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.435698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.435713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.440005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with 
pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.440061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.440076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.444787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.445044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.445060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.451646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.451700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.451715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.457218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.457456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.457474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.463668] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.463889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.463904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.472517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.472772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.472787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.477285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.477345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.477360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.483515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.483571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.483586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 
08:32:10.489026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.489119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.489134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.494749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.494806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.494821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.500794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.500954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.500969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.506419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.506726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.506742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.514018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.514074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.517576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.517626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.517641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.522753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.911 [2024-11-20 08:32:10.523013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.911 [2024-11-20 08:32:10.523029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.911 [2024-11-20 08:32:10.530480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.530702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.530717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.535588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.535642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.535657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.539029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.539086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.539101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.542453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.542512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.542527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.545892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.545981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.545996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.549449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.549502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.549517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.553731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.553801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.553816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.559110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.559345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.559361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.567847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.568139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.912 [2024-11-20 08:32:10.568155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.575137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.575229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.575244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.581525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.581824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.581840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.590883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.590991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.591006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.600964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.601213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.601228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.609929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.610136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.610151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.620414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.620643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.620661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.912 [2024-11-20 08:32:10.631410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:05.912 [2024-11-20 08:32:10.631686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.912 [2024-11-20 08:32:10.631702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.642260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.642419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.642434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.651104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.651192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.651207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.656555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.656810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.656826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.664288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.664543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.664559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.672237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 
00:35:06.174 [2024-11-20 08:32:10.672465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.681306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.681422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.681437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.689560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.689821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.689836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.174 [2024-11-20 08:32:10.697606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.174 [2024-11-20 08:32:10.697694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.174 [2024-11-20 08:32:10.697709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.175 [2024-11-20 08:32:10.706286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fcdd00) with pdu=0x2000166ff3c8 00:35:06.175 [2024-11-20 08:32:10.706531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.175 [2024-11-20 08:32:10.706546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.175 4283.00 IOPS, 535.38 MiB/s 00:35:06.175 Latency(us) 00:35:06.175 [2024-11-20T07:32:10.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.175 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:06.175 nvme0n1 : 2.00 4280.41 535.05 0.00 0.00 3731.28 1576.96 12069.55 00:35:06.175 [2024-11-20T07:32:10.904Z] =================================================================================================================== 00:35:06.175 [2024-11-20T07:32:10.904Z] Total : 4280.41 535.05 0.00 0.00 3731.28 1576.96 12069.55 00:35:06.175 { 00:35:06.175 "results": [ 00:35:06.175 { 00:35:06.175 "job": "nvme0n1", 00:35:06.175 "core_mask": "0x2", 00:35:06.175 "workload": "randwrite", 00:35:06.175 "status": "finished", 00:35:06.175 "queue_depth": 16, 00:35:06.175 "io_size": 131072, 00:35:06.175 "runtime": 2.00495, 00:35:06.175 "iops": 4280.405995161974, 00:35:06.175 "mibps": 535.0507493952467, 00:35:06.175 "io_failed": 0, 00:35:06.175 "io_timeout": 0, 00:35:06.175 "avg_latency_us": 3731.2820321603353, 00:35:06.175 "min_latency_us": 1576.96, 00:35:06.175 "max_latency_us": 12069.546666666667 00:35:06.175 } 00:35:06.175 ], 00:35:06.175 "core_count": 1 00:35:06.175 } 00:35:06.175 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:06.175 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:06.175 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:06.175 | .driver_specific 00:35:06.175 | .nvme_error 00:35:06.175 | .status_code 00:35:06.175 | .command_transient_transport_error' 00:35:06.175 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 277 > 0 )) 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2199536 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2199536 ']' 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2199536 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2199536 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:06.435 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:06.436 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2199536' 00:35:06.436 killing process with pid 2199536 00:35:06.436 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2199536 00:35:06.436 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.436 00:35:06.436 Latency(us) 00:35:06.436 [2024-11-20T07:32:11.165Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.436 [2024-11-20T07:32:11.165Z] =================================================================================================================== 00:35:06.436 [2024-11-20T07:32:11.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.436 08:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2199536 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2197138 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2197138 ']' 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2197138 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2197138 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2197138' 00:35:06.436 killing process with pid 2197138 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2197138 00:35:06.436 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2197138 00:35:06.698 00:35:06.698 real 0m16.306s 00:35:06.698 user 0m32.221s 00:35:06.698 sys 0m3.568s 
00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:06.698 ************************************ 00:35:06.698 END TEST nvmf_digest_error 00:35:06.698 ************************************ 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:06.698 rmmod nvme_tcp 00:35:06.698 rmmod nvme_fabrics 00:35:06.698 rmmod nvme_keyring 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 2197138 ']' 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 2197138 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2197138 ']' 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2197138 00:35:06.698 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2197138) - No such process 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2197138 is not found' 00:35:06.698 Process with pid 2197138 is not found 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:06.698 08:32:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:09.244 08:32:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:35:09.244 00:35:09.244 real 0m43.928s 00:35:09.244 user 1m7.958s 00:35:09.244 sys 0m13.273s 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:09.244 
************************************ 00:35:09.244 END TEST nvmf_digest 00:35:09.244 ************************************ 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.244 ************************************ 00:35:09.244 START TEST nvmf_host_discovery 00:35:09.244 ************************************ 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:09.244 * Looking for test storage... 00:35:09.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.244 08:32:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.244 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:09.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.244 --rc genhtml_branch_coverage=1 00:35:09.244 --rc genhtml_function_coverage=1 00:35:09.244 --rc genhtml_legend=1 00:35:09.244 --rc geninfo_all_blocks=1 00:35:09.244 --rc geninfo_unexecuted_blocks=1 00:35:09.244 00:35:09.244 ' 00:35:09.245 08:32:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:09.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.245 --rc genhtml_branch_coverage=1 00:35:09.245 --rc genhtml_function_coverage=1 00:35:09.245 --rc genhtml_legend=1 00:35:09.245 --rc geninfo_all_blocks=1 00:35:09.245 --rc geninfo_unexecuted_blocks=1 00:35:09.245 00:35:09.245 ' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:09.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.245 --rc genhtml_branch_coverage=1 00:35:09.245 --rc genhtml_function_coverage=1 00:35:09.245 --rc genhtml_legend=1 00:35:09.245 --rc geninfo_all_blocks=1 00:35:09.245 --rc geninfo_unexecuted_blocks=1 00:35:09.245 00:35:09.245 ' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:09.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.245 --rc genhtml_branch_coverage=1 00:35:09.245 --rc genhtml_function_coverage=1 00:35:09.245 --rc genhtml_legend=1 00:35:09.245 --rc geninfo_all_blocks=1 00:35:09.245 --rc geninfo_unexecuted_blocks=1 00:35:09.245 00:35:09.245 ' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:09.245 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:09.245 08:32:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:35:09.245 08:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 
00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.394 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == 
mlx5 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:17.395 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:17.395 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:17.395 Found net devices under 0000:31:00.0: cvl_0_0 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.395 08:32:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:17.395 Found net devices under 0000:31:00.1: cvl_0_1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.395 08:32:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local 
initiator=initiator0 target=target0 _ns= 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 
00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:17.395 10.0.0.1 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:17.395 
08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:17.395 10.0.0.2 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:17.395 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:17.396 08:32:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:17.396 08:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:17.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.507 ms 00:35:17.396 00:35:17.396 --- 10.0.0.1 ping statistics --- 00:35:17.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.396 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.2 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:17.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:35:17.396 00:35:17.396 --- 10.0.0.2 ping statistics --- 00:35:17.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.396 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:17.396 
08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:17.396 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:17.659 08:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.659 08:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:35:17.659 08:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=2204940 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 
2204940 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2204940 ']' 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.659 08:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.659 [2024-11-20 08:32:22.261137] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:35:17.659 [2024-11-20 08:32:22.261210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.659 [2024-11-20 08:32:22.371445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.921 [2024-11-20 08:32:22.419125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.921 [2024-11-20 08:32:22.419170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:17.921 [2024-11-20 08:32:22.419179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.921 [2024-11-20 08:32:22.419186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.921 [2024-11-20 08:32:22.419191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.921 [2024-11-20 08:32:22.419901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 [2024-11-20 08:32:23.119882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 [2024-11-20 08:32:23.132173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 null0 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 null1 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=2205281 00:35:18.494 
08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 2205281 /tmp/host.sock 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2205281 ']' 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:18.494 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.494 08:32:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.756 [2024-11-20 08:32:23.227967] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:35:18.756 [2024-11-20 08:32:23.228035] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205281 ] 00:35:18.756 [2024-11-20 08:32:23.310700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.756 [2024-11-20 08:32:23.352559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.326 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # notify_id=0 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.327 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.587 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.587 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]] 
00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@50 -- # sort 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@87 -- # get_bdev_list 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:19.588 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 [2024-11-20 08:32:24.339177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:35:19.848 08:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 
00:35:20.419 [2024-11-20 08:32:25.048104] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:20.419 [2024-11-20 08:32:25.048123] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:20.419 [2024-11-20 08:32:25.048137] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:20.419 [2024-11-20 08:32:25.136403] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:20.680 [2024-11-20 08:32:25.360674] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:20.680 [2024-11-20 08:32:25.361517] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10bc650:1 started. 00:35:20.680 [2024-11-20 08:32:25.363133] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:20.680 [2024-11-20 08:32:25.363151] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:20.941 [2024-11-20 08:32:25.407517] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10bc650 was disconnected and freed. delete nvme_qpair. 
00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.941 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.201 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:35:21.202 [2024-11-20 08:32:25.785274] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10c97f0:1 started. 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:21.202 [2024-11-20 08:32:25.788393] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10c97f0 was disconnected and freed. delete nvme_qpair. 
00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 [2024-11-20 08:32:25.887675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:21.202 [2024-11-20 08:32:25.888482] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:21.202 [2024-11-20 08:32:25.888506] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.202 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:21.462 08:32:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.462 [2024-11-20 08:32:25.976773] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:21.462 08:32:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:21.462 08:32:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:35:21.462 [2024-11-20 08:32:26.042660] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:35:21.462 [2024-11-20 08:32:26.042702] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:21.462 [2024-11-20 08:32:26.042711] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:35:21.462 [2024-11-20 08:32:26.042721] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.436 [2024-11-20 08:32:27.143674] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:22.436 [2024-11-20 08:32:27.143698] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:22.436 [2024-11-20 08:32:27.145034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.436 [2024-11-20 08:32:27.145053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.436 [2024-11-20 08:32:27.145063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:22.436 [2024-11-20 08:32:27.145071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.436 [2024-11-20 08:32:27.145079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.436 [2024-11-20 08:32:27.145086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.436 [2024-11-20 08:32:27.145094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.436 [2024-11-20 08:32:27.145101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.436 [2024-11-20 08:32:27.145109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:35:22.436 08:32:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.436 [2024-11-20 08:32:27.155047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.436 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:22.698 [2024-11-20 08:32:27.165082] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.698 [2024-11-20 08:32:27.165096] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:22.698 [2024-11-20 08:32:27.165102] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.698 [2024-11-20 08:32:27.165108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.698 [2024-11-20 08:32:27.165125] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:22.698 [2024-11-20 08:32:27.165463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.698 [2024-11-20 08:32:27.165477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.698 [2024-11-20 08:32:27.165486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.698 [2024-11-20 08:32:27.165502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.698 [2024-11-20 08:32:27.165521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.698 [2024-11-20 08:32:27.165528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.698 [2024-11-20 08:32:27.165537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.698 [2024-11-20 08:32:27.165544] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.698 [2024-11-20 08:32:27.165550] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.699 [2024-11-20 08:32:27.165554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.699 [2024-11-20 08:32:27.175157] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.699 [2024-11-20 08:32:27.175169] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:35:22.699 [2024-11-20 08:32:27.175174] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.175178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.699 [2024-11-20 08:32:27.175193] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.175473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.699 [2024-11-20 08:32:27.175485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.699 [2024-11-20 08:32:27.175492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.699 [2024-11-20 08:32:27.175504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.699 [2024-11-20 08:32:27.175514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.699 [2024-11-20 08:32:27.175521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.699 [2024-11-20 08:32:27.175528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.699 [2024-11-20 08:32:27.175534] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.699 [2024-11-20 08:32:27.175539] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.699 [2024-11-20 08:32:27.175543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:35:22.699 [2024-11-20 08:32:27.185224] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.699 [2024-11-20 08:32:27.185235] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:22.699 [2024-11-20 08:32:27.185240] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.185245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.699 [2024-11-20 08:32:27.185258] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.185540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.699 [2024-11-20 08:32:27.185558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.699 [2024-11-20 08:32:27.185565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.699 [2024-11-20 08:32:27.185576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.699 [2024-11-20 08:32:27.185586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.699 [2024-11-20 08:32:27.185592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.699 [2024-11-20 08:32:27.185599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.699 [2024-11-20 08:32:27.185605] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:35:22.699 [2024-11-20 08:32:27.185610] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.699 [2024-11-20 08:32:27.185614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:22.699 [2024-11-20 08:32:27.195290] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.699 [2024-11-20 08:32:27.195306] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:22.699 [2024-11-20 08:32:27.195311] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.195316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.699 [2024-11-20 08:32:27.195332] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:22.699 [2024-11-20 08:32:27.195613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.699 [2024-11-20 08:32:27.195626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.699 [2024-11-20 08:32:27.195633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.699 [2024-11-20 08:32:27.195644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.699 [2024-11-20 08:32:27.195662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.699 [2024-11-20 08:32:27.195668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.699 [2024-11-20 08:32:27.195675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.699 [2024-11-20 08:32:27.195681] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.699 [2024-11-20 08:32:27.195686] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.699 [2024-11-20 08:32:27.195690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:35:22.699 [2024-11-20 08:32:27.205361] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.699 [2024-11-20 08:32:27.205374] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:22.699 [2024-11-20 08:32:27.205379] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.205384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.699 [2024-11-20 08:32:27.205398] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:22.699 [2024-11-20 08:32:27.205678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.699 [2024-11-20 08:32:27.205689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.699 [2024-11-20 08:32:27.205697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.699 [2024-11-20 08:32:27.205707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.699 [2024-11-20 08:32:27.205725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.699 [2024-11-20 08:32:27.205732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.699 [2024-11-20 08:32:27.205739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.699 [2024-11-20 08:32:27.205745] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.699 [2024-11-20 08:32:27.205750] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.699 [2024-11-20 08:32:27.205755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:22.699 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.699 [2024-11-20 08:32:27.215429] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.699 [2024-11-20 08:32:27.215444] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:22.699 [2024-11-20 08:32:27.215448] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.699 [2024-11-20 08:32:27.215453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.699 [2024-11-20 08:32:27.215468] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:22.699 [2024-11-20 08:32:27.215749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.699 [2024-11-20 08:32:27.215761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.699 [2024-11-20 08:32:27.215768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.699 [2024-11-20 08:32:27.215783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.699 [2024-11-20 08:32:27.215801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.699 [2024-11-20 08:32:27.215808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.699 [2024-11-20 08:32:27.215815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.700 [2024-11-20 08:32:27.215821] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.700 [2024-11-20 08:32:27.215826] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.700 [2024-11-20 08:32:27.215830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:22.700 [2024-11-20 08:32:27.225500] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:22.700 [2024-11-20 08:32:27.225511] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:35:22.700 [2024-11-20 08:32:27.225516] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.700 [2024-11-20 08:32:27.225521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.700 [2024-11-20 08:32:27.225535] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:22.700 [2024-11-20 08:32:27.225786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.700 [2024-11-20 08:32:27.225797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108cd90 with addr=10.0.0.2, port=4420 00:35:22.700 [2024-11-20 08:32:27.225804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cd90 is same with the state(6) to be set 00:35:22.700 [2024-11-20 08:32:27.225816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108cd90 (9): Bad file descriptor 00:35:22.700 [2024-11-20 08:32:27.225833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.700 [2024-11-20 08:32:27.225840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.700 [2024-11-20 08:32:27.225847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.700 [2024-11-20 08:32:27.225853] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.700 [2024-11-20 08:32:27.225858] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.700 [2024-11-20 08:32:27.225867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:35:22.700 [2024-11-20 08:32:27.231234] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:22.700 [2024-11-20 08:32:27.231252] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:22.700 08:32:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.700 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.960 08:32:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.960 08:32:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.900 [2024-11-20 08:32:28.547013] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:23.900 [2024-11-20 08:32:28.547030] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:23.900 [2024-11-20 08:32:28.547043] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:24.159 [2024-11-20 08:32:28.634320] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:35:24.420 [2024-11-20 08:32:28.901533] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:35:24.420 [2024-11-20 08:32:28.902332] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x108a3d0:1 started. 00:35:24.420 [2024-11-20 08:32:28.904165] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:24.420 [2024-11-20 08:32:28.904193] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 [2024-11-20 08:32:28.906554] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x108a3d0 was disconnected and freed. delete nvme_qpair. 
00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.420 request: 00:35:24.420 { 00:35:24.420 "name": "nvme", 00:35:24.420 "trtype": "tcp", 00:35:24.420 "traddr": "10.0.0.2", 00:35:24.420 "adrfam": "ipv4", 00:35:24.420 "trsvcid": "8009", 00:35:24.420 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:24.420 "wait_for_attach": true, 00:35:24.420 "method": "bdev_nvme_start_discovery", 00:35:24.420 "req_id": 1 00:35:24.420 } 00:35:24.420 Got JSON-RPC error response 00:35:24.420 response: 00:35:24.420 { 00:35:24.420 "code": -17, 00:35:24.420 "message": "File exists" 00:35:24.420 } 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 
-- # (( !es == 0 )) 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # get_discovery_ctrlrs 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.420 08:32:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.420 request: 00:35:24.420 { 00:35:24.420 "name": "nvme_second", 00:35:24.420 "trtype": "tcp", 00:35:24.420 "traddr": "10.0.0.2", 00:35:24.420 "adrfam": "ipv4", 00:35:24.420 "trsvcid": "8009", 00:35:24.420 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:24.420 "wait_for_attach": true, 00:35:24.420 "method": "bdev_nvme_start_discovery", 00:35:24.420 "req_id": 1 00:35:24.420 } 00:35:24.420 Got JSON-RPC error response 00:35:24.420 
response: 00:35:24.420 { 00:35:24.420 "code": -17, 00:35:24.420 "message": "File exists" 00:35:24.420 } 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:24.420 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:35:24.421 08:32:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.421 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.680 08:32:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.621 [2024-11-20 08:32:30.159658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.621 [2024-11-20 08:32:30.159698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108a200 with addr=10.0.0.2, port=8010 00:35:25.621 [2024-11-20 08:32:30.159714] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:25.621 [2024-11-20 08:32:30.159727] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:25.621 [2024-11-20 08:32:30.159735] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:26.563 [2024-11-20 08:32:31.161851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.563 [2024-11-20 08:32:31.161884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108a200 with addr=10.0.0.2, port=8010 00:35:26.563 [2024-11-20 08:32:31.161896] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:26.563 [2024-11-20 08:32:31.161904] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:26.563 [2024-11-20 08:32:31.161911] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:27.504 [2024-11-20 08:32:32.163973] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:27.504 request: 00:35:27.504 { 00:35:27.504 "name": "nvme_second", 00:35:27.504 "trtype": "tcp", 00:35:27.504 "traddr": "10.0.0.2", 00:35:27.504 "adrfam": "ipv4", 00:35:27.504 "trsvcid": "8010", 00:35:27.504 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:27.504 "wait_for_attach": false, 00:35:27.505 
"attach_timeout_ms": 3000, 00:35:27.505 "method": "bdev_nvme_start_discovery", 00:35:27.505 "req_id": 1 00:35:27.505 } 00:35:27.505 Got JSON-RPC error response 00:35:27.505 response: 00:35:27.505 { 00:35:27.505 "code": -110, 00:35:27.505 "message": "Connection timed out" 00:35:27.505 } 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:35:27.505 08:32:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 2205281 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:27.505 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:27.765 rmmod nvme_tcp 00:35:27.765 rmmod nvme_fabrics 00:35:27.765 rmmod nvme_keyring 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 2204940 ']' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2204940 ']' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2204940' 00:35:27.765 killing process with pid 2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2204940 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:27.765 08:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:30.312 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:30.312 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_0/address ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:35:30.313 08:32:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:35:30.313 00:35:30.313 real 0m20.988s 00:35:30.313 user 0m23.429s 00:35:30.313 sys 0m7.715s 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:30.313 ************************************ 00:35:30.313 END TEST nvmf_host_discovery 00:35:30.313 ************************************ 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@34 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.313 ************************************ 00:35:30.313 START TEST nvmf_discovery_remove_ifc 00:35:30.313 ************************************ 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:30.313 * Looking for test storage... 
00:35:30.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:35:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.313 --rc genhtml_branch_coverage=1 00:35:30.313 --rc genhtml_function_coverage=1 00:35:30.313 --rc genhtml_legend=1 00:35:30.313 --rc geninfo_all_blocks=1 00:35:30.313 --rc geninfo_unexecuted_blocks=1 00:35:30.313 00:35:30.313 ' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.313 --rc genhtml_branch_coverage=1 00:35:30.313 --rc genhtml_function_coverage=1 00:35:30.313 --rc genhtml_legend=1 00:35:30.313 --rc geninfo_all_blocks=1 00:35:30.313 --rc geninfo_unexecuted_blocks=1 00:35:30.313 00:35:30.313 ' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.313 --rc genhtml_branch_coverage=1 00:35:30.313 --rc genhtml_function_coverage=1 00:35:30.313 --rc genhtml_legend=1 00:35:30.313 --rc geninfo_all_blocks=1 00:35:30.313 --rc geninfo_unexecuted_blocks=1 00:35:30.313 00:35:30.313 ' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.313 --rc genhtml_branch_coverage=1 00:35:30.313 --rc genhtml_function_coverage=1 00:35:30.313 --rc genhtml_legend=1 00:35:30.313 --rc geninfo_all_blocks=1 00:35:30.313 --rc geninfo_unexecuted_blocks=1 00:35:30.313 00:35:30.313 ' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:30.313 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.314 08:32:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:30.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
nvmftestinit 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:35:30.314 08:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.462 08:32:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:38.462 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:38.462 08:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:38.462 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp 
== tcp ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:38.462 Found net devices under 0000:31:00.0: cvl_0_0 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:38.462 Found net devices under 0000:31:00.1: cvl_0_1 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@247 -- # create_target_ns 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.462 08:32:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:38.462 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:38.463 08:32:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:38.463 08:32:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:38.463 10.0.0.1 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.463 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 
00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:38.752 10.0.0.2 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:38.752 
08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:38.752 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:38.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:38.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.513 ms 00:35:38.752 00:35:38.752 --- 10.0.0.1 ping statistics --- 00:35:38.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.753 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:38.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:35:38.753 00:35:38.753 --- 10.0.0.2 ping statistics --- 00:35:38.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.753 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:38.753 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=2211839 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 2211839 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2211839 ']' 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.085 08:32:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.085 [2024-11-20 08:32:43.552926] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:35:39.085 [2024-11-20 08:32:43.552996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.086 [2024-11-20 08:32:43.660957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.086 [2024-11-20 08:32:43.710245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.086 [2024-11-20 08:32:43.710297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.086 [2024-11-20 08:32:43.710305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.086 [2024-11-20 08:32:43.710312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.086 [2024-11-20 08:32:43.710319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:39.086 [2024-11-20 08:32:43.711113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.662 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.662 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:39.662 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:39.662 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:39.662 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 [2024-11-20 08:32:44.436086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.924 [2024-11-20 08:32:44.444355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:39.924 null0 00:35:39.924 [2024-11-20 08:32:44.476299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=2212076 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 2212076 /tmp/host.sock 
00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2212076 ']' 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:39.924 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.924 08:32:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 [2024-11-20 08:32:44.554205] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:35:39.924 [2024-11-20 08:32:44.554268] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212076 ] 00:35:39.924 [2024-11-20 08:32:44.636715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.185 [2024-11-20 08:32:44.679789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.755 08:32:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.755 08:32:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.139 [2024-11-20 08:32:46.445924] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:42.139 [2024-11-20 08:32:46.445945] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:42.139 [2024-11-20 08:32:46.445959] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:42.139 [2024-11-20 08:32:46.532233] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:42.140 [2024-11-20 08:32:46.758520] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:42.140 [2024-11-20 08:32:46.759525] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x20c9670:1 started. 
00:35:42.140 [2024-11-20 08:32:46.761124] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:42.140 [2024-11-20 08:32:46.761165] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:42.140 [2024-11-20 08:32:46.761187] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:42.140 [2024-11-20 08:32:46.761201] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:42.140 [2024-11-20 08:32:46.761222] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:35:42.140 [2024-11-20 08:32:46.763833] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x20c9670 was disconnected and freed. delete nvme_qpair. 
00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:35:42.140 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:42.401 08:32:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:42.401 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.402 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.402 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.402 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:42.402 08:32:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.344 08:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:43.344 08:32:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.344 08:32:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:43.344 08:32:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sleep 1 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:44.730 08:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:45.672 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:45.672 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:45.672 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.672 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.673 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:45.673 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.673 08:32:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:45.673 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.673 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:45.673 08:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:46.614 08:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:47.558 [2024-11-20 08:32:52.201807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:47.558 [2024-11-20 08:32:52.201853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:47.558 [2024-11-20 08:32:52.201871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.558 [2024-11-20 08:32:52.201882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:47.558 [2024-11-20 08:32:52.201889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.558 [2024-11-20 08:32:52.201898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:47.558 [2024-11-20 08:32:52.201905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.558 [2024-11-20 08:32:52.201913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:47.558 [2024-11-20 08:32:52.201921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.558 [2024-11-20 08:32:52.201929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:47.558 [2024-11-20 08:32:52.201936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.558 [2024-11-20 08:32:52.201944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6050 is same with the state(6) to be set 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.558 [2024-11-20 08:32:52.211829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6050 (9): Bad file descriptor 00:35:47.558 [2024-11-20 08:32:52.221868] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:47.558 [2024-11-20 08:32:52.221881] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:47.558 [2024-11-20 08:32:52.221886] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:47.558 [2024-11-20 08:32:52.221891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:47.558 [2024-11-20 08:32:52.221913] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:47.558 08:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:48.943 [2024-11-20 08:32:53.275901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:48.943 [2024-11-20 08:32:53.275943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6050 with addr=10.0.0.2, port=4420 00:35:48.943 [2024-11-20 08:32:53.275957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6050 is same with the state(6) to be set 00:35:48.943 [2024-11-20 08:32:53.275985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6050 (9): Bad file descriptor 00:35:48.943 [2024-11-20 08:32:53.276365] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:48.943 [2024-11-20 08:32:53.276389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:48.943 [2024-11-20 08:32:53.276398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:48.943 [2024-11-20 08:32:53.276407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:48.943 [2024-11-20 08:32:53.276415] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:48.943 [2024-11-20 08:32:53.276421] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:48.943 [2024-11-20 08:32:53.276426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:48.943 [2024-11-20 08:32:53.276435] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:48.943 [2024-11-20 08:32:53.276440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:35:48.943 08:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:49.886 [2024-11-20 08:32:54.278813] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:49.886 [2024-11-20 08:32:54.278837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:49.886 [2024-11-20 08:32:54.278848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:49.886 [2024-11-20 08:32:54.278856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:49.886 [2024-11-20 08:32:54.278868] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:49.886 [2024-11-20 08:32:54.278875] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:49.886 [2024-11-20 08:32:54.278880] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:49.886 [2024-11-20 08:32:54.278884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:49.886 [2024-11-20 08:32:54.278907] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:49.886 [2024-11-20 08:32:54.278930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.886 [2024-11-20 08:32:54.278940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.886 [2024-11-20 08:32:54.278951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.886 [2024-11-20 08:32:54.278958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.886 [2024-11-20 08:32:54.278966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:49.886 [2024-11-20 08:32:54.278974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.886 [2024-11-20 08:32:54.278982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.886 [2024-11-20 08:32:54.278989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.886 [2024-11-20 08:32:54.278997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.886 [2024-11-20 08:32:54.279004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.886 [2024-11-20 08:32:54.279012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:49.886 [2024-11-20 08:32:54.279285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095380 (9): Bad file descriptor 00:35:49.886 [2024-11-20 08:32:54.280298] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:49.886 [2024-11-20 08:32:54.280310] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:49.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.828 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.087 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:51.087 08:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:35:51.658 [2024-11-20 08:32:56.340013] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:51.658 [2024-11-20 08:32:56.340033] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:51.658 [2024-11-20 08:32:56.340047] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:51.919 [2024-11-20 08:32:56.468453] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:35:51.919 [2024-11-20 08:32:56.568256] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:51.919 [2024-11-20 08:32:56.569028] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x20a1450:1 started. 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:51.919 [2024-11-20 08:32:56.570257] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:51.919 [2024-11-20 08:32:56.570289] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:51.919 [2024-11-20 08:32:56.570314] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:51.919 [2024-11-20 08:32:56.570328] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:51.919 [2024-11-20 08:32:56.570336] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:35:51.919 [2024-11-20 08:32:56.578124] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x20a1450 was disconnected and freed. delete nvme_qpair. 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 2212076 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2212076 ']' 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2212076 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.919 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2212076 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2212076' 00:35:52.180 killing process with pid 2212076 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2212076 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2212076 00:35:52.180 08:32:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:52.180 rmmod nvme_tcp 00:35:52.180 rmmod nvme_fabrics 00:35:52.180 rmmod nvme_keyring 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 2211839 ']' 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 2211839 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2211839 ']' 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2211839 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.180 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2211839 00:35:52.441 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:52.441 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:52.441 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211839' 00:35:52.441 killing process with pid 2211839 00:35:52.441 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2211839 00:35:52.441 08:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2211839 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:52.441 08:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:35:54.987 08:32:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:35:54.987 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:35:54.988 00:35:54.988 real 0m24.509s 00:35:54.988 user 0m27.727s 00:35:54.988 sys 0m7.888s 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:54.988 ************************************ 00:35:54.988 END TEST nvmf_discovery_remove_ifc 00:35:54.988 ************************************ 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@35 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.988 ************************************ 00:35:54.988 START TEST nvmf_multicontroller 00:35:54.988 ************************************ 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:35:54.988 * Looking for test storage... 
00:35:54.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:54.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.988 --rc genhtml_branch_coverage=1 00:35:54.988 --rc genhtml_function_coverage=1 
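The `lt 1.15 2` / `cmp_versions` trace above (splitting both versions on `IFS=.-:` into `ver1`/`ver2` arrays, then comparing field by field with `decimal`) can be condensed into a standalone sketch. This is a simplified reconstruction of the idea, not the actual `scripts/common.sh` implementation: the real helper also validates non-numeric fields via its `decimal` function.

```shell
# Hedged re-sketch of the version comparison exercised above:
# split on '.', '-' or ':' and compare numerically, left to right.
# Returns 0 (true) when $1 < $2, 1 otherwise.
lt() {
  local IFS=.-:
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    # Missing fields default to 0, so "2" compares like "2.0".
    if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
  done
  return 1  # equal is not "less than"
}
lt 1.15 2 && echo "1.15 < 2"   # the lcov-version check from the trace
```

This is why the trace selects the `--rc lcov_branch_coverage=1` spelling of the options: lcov 1.15 predates the 2.x flag names.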
00:35:54.988 --rc genhtml_legend=1 00:35:54.988 --rc geninfo_all_blocks=1 00:35:54.988 --rc geninfo_unexecuted_blocks=1 00:35:54.988 00:35:54.988 ' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:54.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.988 --rc genhtml_branch_coverage=1 00:35:54.988 --rc genhtml_function_coverage=1 00:35:54.988 --rc genhtml_legend=1 00:35:54.988 --rc geninfo_all_blocks=1 00:35:54.988 --rc geninfo_unexecuted_blocks=1 00:35:54.988 00:35:54.988 ' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:54.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.988 --rc genhtml_branch_coverage=1 00:35:54.988 --rc genhtml_function_coverage=1 00:35:54.988 --rc genhtml_legend=1 00:35:54.988 --rc geninfo_all_blocks=1 00:35:54.988 --rc geninfo_unexecuted_blocks=1 00:35:54.988 00:35:54.988 ' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:54.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.988 --rc genhtml_branch_coverage=1 00:35:54.988 --rc genhtml_function_coverage=1 00:35:54.988 --rc genhtml_legend=1 00:35:54.988 --rc geninfo_all_blocks=1 00:35:54.988 --rc geninfo_unexecuted_blocks=1 00:35:54.988 00:35:54.988 ' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.988 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:54.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:54.989 08:32:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:35:54.989 08:32:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # 
local -A pci_drivers 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:03.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:03.137 08:33:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:03.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:03.137 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.138 
08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:03.138 Found net devices under 0000:31:00.0: cvl_0_0 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:03.138 Found net devices under 0000:31:00.1: cvl_0_1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:03.138 08:33:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@247 -- # create_target_ns 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 
-- # local -g _dev 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:03.138 10.0.0.1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:03.138 10.0.0.2 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:03.138 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.139 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.139 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:03.139 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:03.401 
08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:03.401 08:33:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:03.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:36:03.401 00:36:03.401 --- 10.0.0.1 ping statistics --- 00:36:03.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.401 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n 
cvl_0_1 ]] 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:03.401 08:33:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:03.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:03.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:36:03.401 00:36:03.401 --- 10.0.0.2 ping statistics --- 00:36:03.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.401 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:03.401 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 
00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:03.402 
08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:03.402 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=2219424 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 2219424 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2219424 ']' 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:03.663 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:03.663 [2024-11-20 08:33:08.184268] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:36:03.663 [2024-11-20 08:33:08.184318] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.663 [2024-11-20 08:33:08.285476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:03.663 [2024-11-20 08:33:08.324393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.663 [2024-11-20 08:33:08.324427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.663 [2024-11-20 08:33:08.324435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.663 [2024-11-20 08:33:08.324442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.663 [2024-11-20 08:33:08.324448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.663 [2024-11-20 08:33:08.326047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.663 [2024-11-20 08:33:08.326203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.663 [2024-11-20 08:33:08.326204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.606 08:33:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 [2024-11-20 08:33:09.048231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:36:04.606 Malloc0 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 [2024-11-20 08:33:09.112924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:04.606 
08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 [2024-11-20 08:33:09.124836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 Malloc1 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=2219670 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 2219670 /var/tmp/bdevperf.sock 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2219670 ']' 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:04.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:04.606 08:33:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.549 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.549 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:36:05.549 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:36:05.549 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.549 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 NVMe0n1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 08:33:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.811 1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 request: 00:36:05.811 { 00:36:05.811 "name": "NVMe0", 00:36:05.811 "trtype": "tcp", 00:36:05.811 "traddr": "10.0.0.2", 00:36:05.811 "adrfam": "ipv4", 00:36:05.811 "trsvcid": "4420", 00:36:05.811 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:36:05.811 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:36:05.811 "hostaddr": "10.0.0.1", 00:36:05.811 "prchk_reftag": false, 00:36:05.811 "prchk_guard": false, 00:36:05.811 "hdgst": false, 00:36:05.811 "ddgst": false, 00:36:05.811 "allow_unrecognized_csi": false, 00:36:05.811 "method": "bdev_nvme_attach_controller", 00:36:05.811 "req_id": 1 00:36:05.811 } 00:36:05.811 Got JSON-RPC error response 00:36:05.811 response: 00:36:05.811 { 00:36:05.811 "code": -114, 00:36:05.811 "message": "A controller named NVMe0 already exists with the specified network path" 00:36:05.811 } 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 request: 00:36:05.811 { 00:36:05.811 "name": "NVMe0", 00:36:05.811 "trtype": "tcp", 00:36:05.811 "traddr": "10.0.0.2", 00:36:05.811 "adrfam": "ipv4", 00:36:05.811 "trsvcid": "4420", 00:36:05.811 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:05.811 "hostaddr": "10.0.0.1", 00:36:05.811 "prchk_reftag": false, 00:36:05.811 "prchk_guard": false, 00:36:05.811 "hdgst": false, 00:36:05.811 "ddgst": false, 00:36:05.811 "allow_unrecognized_csi": false, 00:36:05.811 "method": "bdev_nvme_attach_controller", 00:36:05.811 "req_id": 1 00:36:05.811 } 00:36:05.811 Got JSON-RPC error response 00:36:05.811 response: 00:36:05.811 { 00:36:05.811 "code": -114, 00:36:05.811 "message": "A controller named NVMe0 already exists with the specified network path" 00:36:05.811 } 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 request: 00:36:05.811 { 00:36:05.811 "name": "NVMe0", 00:36:05.811 "trtype": "tcp", 00:36:05.811 "traddr": "10.0.0.2", 00:36:05.811 "adrfam": "ipv4", 00:36:05.811 "trsvcid": "4420", 00:36:05.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:05.811 
"hostaddr": "10.0.0.1", 00:36:05.811 "prchk_reftag": false, 00:36:05.811 "prchk_guard": false, 00:36:05.811 "hdgst": false, 00:36:05.811 "ddgst": false, 00:36:05.811 "multipath": "disable", 00:36:05.811 "allow_unrecognized_csi": false, 00:36:05.811 "method": "bdev_nvme_attach_controller", 00:36:05.811 "req_id": 1 00:36:05.811 } 00:36:05.811 Got JSON-RPC error response 00:36:05.811 response: 00:36:05.811 { 00:36:05.811 "code": -114, 00:36:05.811 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:36:05.811 } 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.811 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.811 request: 00:36:05.811 { 00:36:05.811 "name": "NVMe0", 00:36:05.811 "trtype": "tcp", 00:36:05.812 "traddr": "10.0.0.2", 00:36:05.812 "adrfam": "ipv4", 00:36:05.812 "trsvcid": "4420", 00:36:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:05.812 "hostaddr": "10.0.0.1", 00:36:05.812 "prchk_reftag": false, 00:36:05.812 "prchk_guard": false, 00:36:05.812 "hdgst": false, 00:36:05.812 "ddgst": false, 00:36:05.812 "multipath": "failover", 00:36:05.812 "allow_unrecognized_csi": false, 00:36:05.812 "method": "bdev_nvme_attach_controller", 00:36:05.812 "req_id": 1 00:36:05.812 } 00:36:05.812 Got JSON-RPC error response 00:36:05.812 response: 00:36:05.812 { 00:36:05.812 "code": -114, 00:36:05.812 "message": "A controller named NVMe0 already exists with the specified network path" 00:36:05.812 } 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.812 
08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.812 NVMe0n1 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.812 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:06.073 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:36:06.073 08:33:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:07.013 { 00:36:07.013 "results": [ 00:36:07.013 { 00:36:07.013 "job": "NVMe0n1", 00:36:07.013 "core_mask": "0x1", 00:36:07.013 "workload": "write", 00:36:07.013 "status": "finished", 00:36:07.013 "queue_depth": 128, 00:36:07.013 "io_size": 4096, 00:36:07.013 "runtime": 1.006119, 00:36:07.013 "iops": 27635.89595266564, 00:36:07.013 "mibps": 107.95271856510016, 00:36:07.013 "io_failed": 0, 00:36:07.013 "io_timeout": 0, 00:36:07.013 "avg_latency_us": 4617.624114607684, 00:36:07.013 "min_latency_us": 2921.8133333333335, 00:36:07.013 "max_latency_us": 14199.466666666667 00:36:07.013 } 00:36:07.013 ], 00:36:07.013 "core_count": 1 00:36:07.013 } 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.274 08:33:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n '' ]] 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2219670 ']' 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219670' 00:36:07.274 killing process with pid 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2219670 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.274 08:33:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:36:07.274 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:07.274 [2024-11-20 08:33:09.253717] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:36:07.274 [2024-11-20 08:33:09.253774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219670 ] 00:36:07.274 [2024-11-20 08:33:09.331287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.274 [2024-11-20 08:33:09.367642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.274 [2024-11-20 08:33:10.612578] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 4b758c20-86ef-4cd9-8e8a-7a3f5ff6944e already exists 00:36:07.274 [2024-11-20 08:33:10.612608] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:4b758c20-86ef-4cd9-8e8a-7a3f5ff6944e alias for bdev NVMe1n1 00:36:07.274 [2024-11-20 08:33:10.612618] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:36:07.274 Running I/O for 1 seconds... 00:36:07.274 27629.00 IOPS, 107.93 MiB/s 00:36:07.274 Latency(us) 00:36:07.274 [2024-11-20T07:33:12.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.274 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:36:07.274 NVMe0n1 : 1.01 27635.90 107.95 0.00 0.00 4617.62 2921.81 14199.47 00:36:07.274 [2024-11-20T07:33:12.003Z] =================================================================================================================== 00:36:07.274 [2024-11-20T07:33:12.003Z] Total : 27635.90 107.95 0.00 0.00 4617.62 2921.81 14199.47 00:36:07.274 Received shutdown signal, test time was about 1.000000 seconds 00:36:07.274 00:36:07.274 Latency(us) 00:36:07.274 [2024-11-20T07:33:12.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.274 [2024-11-20T07:33:12.003Z] =================================================================================================================== 00:36:07.274 [2024-11-20T07:33:12.003Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:36:07.274 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:07.274 08:33:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:07.535 rmmod nvme_tcp 00:36:07.535 rmmod nvme_fabrics 00:36:07.535 rmmod nvme_keyring 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 2219424 ']' 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 2219424 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2219424 ']' 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2219424 
00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219424 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219424' 00:36:07.535 killing process with pid 2219424 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2219424 00:36:07.535 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2219424 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:07.796 08:33:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:09.708 08:33:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:09.708 00:36:09.708 real 0m15.148s 00:36:09.708 user 0m17.465s 00:36:09.708 sys 0m7.236s 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:09.708 ************************************ 00:36:09.708 END TEST nvmf_multicontroller 00:36:09.708 ************************************ 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # [[ tcp == \r\d\m\a ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # [[ 0 -eq 1 ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # [[ 0 -eq 1 ]] 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:36:09.708 00:36:09.708 real 7m16.176s 00:36:09.708 user 12m2.448s 00:36:09.708 sys 2m33.714s 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.708 08:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.708 ************************************ 00:36:09.708 END TEST nvmf_host 00:36:09.708 ************************************ 00:36:09.969 08:33:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 
00:36:09.969 08:33:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ 0 -eq 0 ]] 00:36:09.969 08:33:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:09.969 08:33:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:09.969 08:33:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:09.969 08:33:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.969 ************************************ 00:36:09.969 START TEST nvmf_target_core_interrupt_mode 00:36:09.969 ************************************ 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:09.969 * Looking for test storage... 00:36:09.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:09.969 08:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:09.969 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.970 --rc genhtml_branch_coverage=1 00:36:09.970 --rc genhtml_function_coverage=1 00:36:09.970 --rc genhtml_legend=1 00:36:09.970 --rc geninfo_all_blocks=1 00:36:09.970 --rc geninfo_unexecuted_blocks=1 00:36:09.970 
00:36:09.970 ' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.970 --rc genhtml_branch_coverage=1 00:36:09.970 --rc genhtml_function_coverage=1 00:36:09.970 --rc genhtml_legend=1 00:36:09.970 --rc geninfo_all_blocks=1 00:36:09.970 --rc geninfo_unexecuted_blocks=1 00:36:09.970 00:36:09.970 ' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.970 --rc genhtml_branch_coverage=1 00:36:09.970 --rc genhtml_function_coverage=1 00:36:09.970 --rc genhtml_legend=1 00:36:09.970 --rc geninfo_all_blocks=1 00:36:09.970 --rc geninfo_unexecuted_blocks=1 00:36:09.970 00:36:09.970 ' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.970 --rc genhtml_branch_coverage=1 00:36:09.970 --rc genhtml_function_coverage=1 00:36:09.970 --rc genhtml_legend=1 00:36:09.970 --rc geninfo_all_blocks=1 00:36:09.970 --rc geninfo_unexecuted_blocks=1 00:36:09.970 00:36:09.970 ' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.970 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:10.231 08:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:10.231 ************************************ 00:36:10.231 START TEST nvmf_abort 00:36:10.231 ************************************ 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:10.231 * Looking for test storage... 
00:36:10.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:10.231 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:10.232 08:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:10.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.232 --rc genhtml_branch_coverage=1 00:36:10.232 --rc genhtml_function_coverage=1 00:36:10.232 --rc genhtml_legend=1 00:36:10.232 --rc geninfo_all_blocks=1 00:36:10.232 --rc geninfo_unexecuted_blocks=1 00:36:10.232 00:36:10.232 ' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:10.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.232 --rc genhtml_branch_coverage=1 00:36:10.232 --rc genhtml_function_coverage=1 00:36:10.232 --rc genhtml_legend=1 00:36:10.232 --rc geninfo_all_blocks=1 00:36:10.232 --rc geninfo_unexecuted_blocks=1 00:36:10.232 00:36:10.232 ' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:10.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.232 --rc genhtml_branch_coverage=1 00:36:10.232 --rc genhtml_function_coverage=1 00:36:10.232 --rc genhtml_legend=1 00:36:10.232 --rc geninfo_all_blocks=1 00:36:10.232 --rc geninfo_unexecuted_blocks=1 00:36:10.232 00:36:10.232 ' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:10.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.232 --rc genhtml_branch_coverage=1 00:36:10.232 --rc genhtml_function_coverage=1 00:36:10.232 --rc genhtml_legend=1 00:36:10.232 --rc geninfo_all_blocks=1 00:36:10.232 --rc geninfo_unexecuted_blocks=1 00:36:10.232 00:36:10.232 ' 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.232 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:10.493 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:10.494 
08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:36:10.494 08:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:18.645 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:18.646 
08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:18.646 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:18.646 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:18.646 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:18.646 Found net devices under 0000:31:00.0: cvl_0_0 00:36:18.646 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:18.646 Found net devices under 0000:31:00.1: cvl_0_1 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:18.646 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:18.646 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:18.647 10.0.0.1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:36:18.647 10.0.0.2 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:18.647 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:18.910 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:18.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:18.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.542 ms 00:36:18.910 00:36:18.910 --- 10.0.0.1 ping statistics --- 00:36:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.910 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:18.910 
08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:18.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:18.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:36:18.910 00:36:18.910 --- 10.0.0.2 ping statistics --- 00:36:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.910 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 
00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:18.910 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:18.911 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort 
-- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.911 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.911 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=2225343 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 2225343 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2225343 ']' 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.911 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.911 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.173 [2024-11-20 08:33:23.678275] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:19.173 [2024-11-20 08:33:23.679423] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:36:19.173 [2024-11-20 08:33:23.679475] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.173 [2024-11-20 08:33:23.788874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:19.173 [2024-11-20 08:33:23.840414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.173 [2024-11-20 08:33:23.840472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.173 [2024-11-20 08:33:23.840481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.173 [2024-11-20 08:33:23.840487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.173 [2024-11-20 08:33:23.840493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:19.173 [2024-11-20 08:33:23.842327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.173 [2024-11-20 08:33:23.842494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.173 [2024-11-20 08:33:23.842494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:19.435 [2024-11-20 08:33:23.918666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:19.435 [2024-11-20 08:33:23.918716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:19.435 [2024-11-20 08:33:23.919256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:19.435 [2024-11-20 08:33:23.919576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:20.008 08:33:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 [2024-11-20 08:33:24.547389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 Malloc0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 Delay0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 [2024-11-20 08:33:24.647354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.008 08:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:20.270 [2024-11-20 08:33:24.774471] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:22.181 Initializing NVMe Controllers 00:36:22.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:22.182 controller IO queue size 128 less than required 00:36:22.182 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:22.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:22.182 Initialization complete. Launching workers. 00:36:22.182 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29101 00:36:22.182 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29158, failed to submit 66 00:36:22.182 success 29101, unsuccessful 57, failed 0 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:22.182 
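(For readers reconstructing this run: the abort test's target setup, as visible in the trace above, boils down to the following command sequence. This is a sketch assembled from the logged RPC calls only — it assumes a running `nvmf_tgt` listening on its default RPC socket, and `rpc.py` here stands for the repository's `scripts/rpc.py`; it is not runnable standalone.)

```shell
# Sketch of the nvmf_abort setup sequence, copied from the trace above.
# Assumes nvmf_tgt is already running (started with -m 0xE --interrupt-mode).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The abort example then drives queued I/O and submits aborts against it:
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
```

The `bdev_delay` layer (1 ms on every I/O path, per the `-r/-t/-w/-n 1000000` microsecond arguments) is what keeps commands queued long enough for the abort example to race against them, which is why the run above reports ~29k successful aborts.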
08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:22.182 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:22.182 rmmod nvme_tcp 00:36:22.182 rmmod nvme_fabrics 00:36:22.182 rmmod nvme_keyring 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 2225343 ']' 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 2225343 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2225343 ']' 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2225343 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.443 08:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225343 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225343' 00:36:22.443 killing process with pid 2225343 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2225343 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2225343 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:22.443 08:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in 
"${dev_map[@]}" 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:36:24.992 00:36:24.992 real 0m14.498s 00:36:24.992 user 0m11.043s 00:36:24.992 sys 0m7.938s 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.992 ************************************ 00:36:24.992 END TEST nvmf_abort 00:36:24.992 ************************************ 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:24.992 ************************************ 00:36:24.992 START TEST nvmf_ns_hotplug_stress 00:36:24.992 ************************************ 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:24.992 * Looking for test storage... 00:36:24.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.992 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@340 -- # ver1_l=2 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 
00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.993 --rc genhtml_branch_coverage=1 00:36:24.993 --rc genhtml_function_coverage=1 00:36:24.993 --rc genhtml_legend=1 00:36:24.993 --rc geninfo_all_blocks=1 00:36:24.993 --rc geninfo_unexecuted_blocks=1 00:36:24.993 00:36:24.993 ' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.993 --rc genhtml_branch_coverage=1 00:36:24.993 --rc genhtml_function_coverage=1 00:36:24.993 --rc genhtml_legend=1 00:36:24.993 --rc geninfo_all_blocks=1 00:36:24.993 --rc geninfo_unexecuted_blocks=1 00:36:24.993 00:36:24.993 ' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.993 --rc genhtml_branch_coverage=1 00:36:24.993 --rc genhtml_function_coverage=1 00:36:24.993 --rc genhtml_legend=1 00:36:24.993 --rc 
geninfo_all_blocks=1 00:36:24.993 --rc geninfo_unexecuted_blocks=1 00:36:24.993 00:36:24.993 ' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.993 --rc genhtml_branch_coverage=1 00:36:24.993 --rc genhtml_function_coverage=1 00:36:24.993 --rc genhtml_legend=1 00:36:24.993 --rc geninfo_all_blocks=1 00:36:24.993 --rc geninfo_unexecuted_blocks=1 00:36:24.993 00:36:24.993 ' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:24.993 08:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.993 
08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:24.993 08:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:24.993 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:36:24.994 08:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:36:24.994 08:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:33.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:33.148 08:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:33.148 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:33.148 08:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:33.148 Found net devices under 0000:31:00.0: cvl_0_0 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:33.148 Found net devices under 0000:31:00.1: cvl_0_1 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:33.148 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # 
local ns=nvmf_ns_spdk 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:36:33.149 08:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:33.149 10.0.0.1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
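The trace above shows setup.sh's `val_to_ip` helper turning the pool integers 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A minimal standalone sketch of that conversion; the bit-shift extraction of the four octets is an assumption inferred from the printf arguments, since the log only shows the final printf call:

```shell
# Hypothetical re-implementation of val_to_ip: split a 32-bit integer
# into four octets and print them in dotted-quad form.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( val >> 24 & 255 )) \
    $(( val >> 16 & 255 )) \
    $(( val >> 8  & 255 )) \
    $(( val       & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2 (0x0a000002)
```

The reverse mapping is why the scripts can do plain integer arithmetic (`$((++ip))`) on addresses and only convert to dotted-quad at the point of use.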
/sys/class/net/cvl_0_1/ifalias' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:33.149 10.0.0.2 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:33.149 08:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:33.149 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
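The `setup_interfaces` loop traced above (setup.sh@31-33 and @44-48) starts the pool at 0x0a000001 and hands each initiator/target pair two consecutive addresses, advancing with `(( _dev++, ip_pool += 2 ))`. A sketch of that allocation scheme under stated assumptions: the pair count of 3 is illustrative (the log runs with 1), and the `val_to_ip` helper here is a stand-in for the one in setup.sh:

```shell
# Stand-in converter (assumption: mirrors setup.sh's val_to_ip output).
val_to_ip() {
  printf '%u.%u.%u.%u' \
    $(( $1 >> 24 & 255 )) $(( $1 >> 16 & 255 )) \
    $(( $1 >> 8  & 255 )) $(( $1       & 255 ))
}

ip_pool=$(( 0x0a000001 ))   # pool base, as in the trace
no=3                        # illustrative pair count

# Each pair takes ip_pool (initiator) and ip_pool+1 (target).
for (( dev = 0; dev < no; dev++ )); do
  echo "pair$dev: initiator=$(val_to_ip "$ip_pool") target=$(val_to_ip $(( ip_pool + 1 )))"
  (( ip_pool += 2 ))
done
# pair0: initiator=10.0.0.1 target=10.0.0.2
# pair1: initiator=10.0.0.3 target=10.0.0.4
# pair2: initiator=10.0.0.5 target=10.0.0.6
```

This is also why the guard `(_dev + no) * 2 <= 255` appears in the trace: the pool stays inside the final octet of 10.0.0.0/24.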
00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:33.412 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:33.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.582 ms 00:36:33.413 00:36:33.413 --- 10.0.0.1 ping statistics --- 00:36:33.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.413 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:33.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:33.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:36:33.413 00:36:33.413 --- 10.0.0.2 ping statistics --- 00:36:33.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.413 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
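Throughout the trace, `set_ip` records each device's address in `/sys/class/net/<dev>/ifalias` with `tee`, and `get_ip_address` later recovers it with `cat` (as at setup.sh@163 above). A minimal sketch of that bookkeeping pattern; a temp file stands in for the sysfs node, since writing a real ifalias needs a NIC and root:

```shell
# Temp file standing in for /sys/class/net/<dev>/ifalias (assumption:
# no real interface available in this sketch).
ifalias_file=$(mktemp)

# set_ip side: store the address; tee echoes it back, as in the log.
echo 10.0.0.1 | tee "$ifalias_file"

# get_ip_address side: read the alias back and report it if non-empty.
ip=$(cat "$ifalias_file")
[ -n "$ip" ] && echo "$ip"   # 10.0.0.1

rm -f "$ifalias_file"
```

Keeping the address in ifalias lets later helpers (like the `get_initiator_ip_address`/`get_target_ip_address` calls below) recover it without re-parsing `ip addr` output, inside or outside the namespace.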
00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- 
# local dev=initiator1 in_ns= ip 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.413 08:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:33.413 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:36:33.414 08:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.414 08:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=2230729 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 2230729 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2230729 ']' 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:33.414 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:33.414 [2024-11-20 08:33:38.112696] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:33.414 [2024-11-20 08:33:38.114547] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:36:33.414 [2024-11-20 08:33:38.114631] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.676 [2024-11-20 08:33:38.224450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:33.676 [2024-11-20 08:33:38.275983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.676 [2024-11-20 08:33:38.276037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.676 [2024-11-20 08:33:38.276046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.676 [2024-11-20 08:33:38.276057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:33.676 [2024-11-20 08:33:38.276065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:33.676 [2024-11-20 08:33:38.277913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:33.676 [2024-11-20 08:33:38.278128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:33.676 [2024-11-20 08:33:38.278232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.676 [2024-11-20 08:33:38.355415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:33.676 [2024-11-20 08:33:38.355504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:33.676 [2024-11-20 08:33:38.356254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:33.676 [2024-11-20 08:33:38.356502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
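With the target up in interrupt mode, the trace that follows boils down to ns_hotplug_stress.sh's setup RPCs (transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev chain). A dry-run sketch of that sequence, with NQN, addresses, and sizes taken from the log — `rpc` here just echoes, since the real scripts/rpc.py and a running nvmf_tgt exist only on the CI host:

```shell
#!/usr/bin/env sh
# Dry run: replace the body with `scripts/rpc.py "$@"` on a real SPDK checkout.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport init
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                # backing malloc bdev
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512                     # 1000 MB null bdev, 512 B blocks
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```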
00:36:34.250 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.250 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:34.250 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:34.250 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:34.250 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:34.512 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.512 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:34.512 08:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:34.512 [2024-11-20 08:33:39.135187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.512 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:34.772 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:34.772 [2024-11-20 08:33:39.467953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:36:34.772 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:35.033 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:35.294 Malloc0 00:36:35.294 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:35.294 Delay0 00:36:35.294 08:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.555 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:35.849 NULL1 00:36:35.849 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:35.849 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2231300 00:36:35.849 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 
512 -Q 1000 00:36:35.849 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:35.849 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.156 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.463 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:36.463 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:36.463 true 00:36:36.463 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:36.464 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.725 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.987 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:36.987 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1002 00:36:36.987 true 00:36:36.987 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:36.987 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.248 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.509 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:37.509 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:37.509 true 00:36:37.509 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:37.509 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.769 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.030 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:38.030 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1004 00:36:38.030 true 00:36:38.292 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:38.292 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.292 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.553 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:38.553 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:38.813 true 00:36:38.813 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:38.814 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.814 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.074 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:39.074 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:39.335 true 00:36:39.335 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:39.335 08:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.595 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.595 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:39.595 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:39.857 true 00:36:39.857 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:39.857 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.118 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.118 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:40.118 08:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:40.378 true 00:36:40.378 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:40.378 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.639 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.900 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:40.900 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:40.900 true 00:36:40.900 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:40.900 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.162 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.423 08:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:41.423 08:33:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:41.423 true 00:36:41.423 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:41.423 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.683 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.944 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:41.944 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:41.944 true 00:36:41.944 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:41.944 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.204 08:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.465 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 
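The trace above repeats one pattern: while spdk_nvme_perf (PID 2231300) stays alive per `kill -0`, the script removes namespace 1 from cnode1, re-adds Delay0, and resizes NULL1 one unit larger each pass (null_size 1000, 1001, 1002, …). A dry-run sketch of that loop, with names taken from the log (`rpc` echoes instead of calling the real scripts/rpc.py, and three iterations stand in for the perf-lifetime condition):

```shell
#!/usr/bin/env sh
# Dry run of the ns_hotplug_stress loop; on a real host the condition
# would be `kill -0 "$PERF_PID"` and rpc() would invoke scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

for i in 1 2 3; do
  rpc nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove the namespace
  rpc nvmf_subsystem_add_ns "$NQN" Delay0   # hot-add it back
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"   # grow NULL1 under the stress load
done
```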
00:36:42.465 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:42.465 true 00:36:42.726 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:42.726 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.726 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.987 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:42.987 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:42.987 true 00:36:43.247 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:43.247 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.247 08:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.508 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:36:43.508 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:43.768 true 00:36:43.768 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:43.768 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.768 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.028 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:44.028 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:44.289 true 00:36:44.289 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:44.289 08:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.289 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.550 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:44.550 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:44.810 true 00:36:44.810 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:44.810 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.071 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.071 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:45.071 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:45.332 true 00:36:45.332 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:45.332 08:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.593 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.593 08:33:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:45.593 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:45.853 true 00:36:45.853 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:45.853 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.113 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.373 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:46.373 08:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:46.373 true 00:36:46.373 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:46.373 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.634 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:36:46.894 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:46.894 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:46.894 true 00:36:46.894 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:46.894 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.155 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.415 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:47.415 08:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:47.415 true 00:36:47.675 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:47.675 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.675 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.935 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:47.935 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:48.197 true 00:36:48.197 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:48.197 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.197 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.458 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:48.458 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:48.718 true 00:36:48.718 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:48.718 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.978 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.978 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:48.978 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:49.238 true 00:36:49.238 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:49.238 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.499 08:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.499 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:49.499 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:49.759 true 00:36:49.759 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:49.759 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.020 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.020 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:50.020 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:50.281 true 00:36:50.281 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:50.281 08:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.541 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.803 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:50.803 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:50.803 true 00:36:50.803 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:50.803 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.064 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.325 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:51.325 08:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:51.325 true 00:36:51.325 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:51.325 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.585 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.846 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:51.846 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:51.846 true 00:36:51.846 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:51.846 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.107 08:33:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.367 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:52.367 08:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:52.367 true 00:36:52.627 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:52.627 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.627 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.887 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:36:52.887 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:36:52.887 true 00:36:53.149 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:53.149 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:36:53.149 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.411 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:36:53.411 08:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:36:53.672 true 00:36:53.672 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:53.672 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.672 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.933 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:36:53.933 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:36:54.194 true 00:36:54.194 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:54.194 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:54.454 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.454 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:36:54.454 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:36:54.715 true 00:36:54.715 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:54.715 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.975 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.975 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:36:54.975 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:36:55.236 true 00:36:55.236 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:55.236 08:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.496 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.496 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:36:55.496 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:36:55.758 true 00:36:55.758 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:55.758 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.018 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.279 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:36:56.279 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:36:56.279 true 00:36:56.279 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:56.279 08:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.539 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.799 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:36:56.799 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:36:56.799 true 00:36:56.799 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:56.799 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.059 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.319 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:36:57.319 08:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:36:57.319 true 00:36:57.319 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:57.319 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.580 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.840 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:36:57.840 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:36:57.840 true 00:36:58.101 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:58.101 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.101 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.361 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:36:58.362 08:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:36:58.623 true 00:36:58.623 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:58.623 08:34:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.623 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.884 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:36:58.884 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:36:59.145 true 00:36:59.145 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:36:59.145 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.145 08:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.407 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:36:59.407 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:36:59.668 true 00:36:59.668 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 
00:36:59.668 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.929 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.929 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:36:59.929 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:37:00.190 true 00:37:00.190 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:00.190 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.450 08:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.450 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:37:00.450 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:37:00.711 true 00:37:00.711 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2231300 00:37:00.711 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.972 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.972 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:37:00.972 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:37:01.232 true 00:37:01.232 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:01.232 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.493 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.753 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:37:01.753 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:37:01.753 true 00:37:01.753 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:01.753 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.014 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.274 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:37:02.274 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:37:02.274 true 00:37:02.274 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:02.274 08:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.535 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.795 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:37:02.795 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:37:02.795 true 00:37:02.795 08:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:02.795 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.055 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.314 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:37:03.314 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:37:03.575 true 00:37:03.575 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:03.575 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.575 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.836 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:37:03.836 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:37:04.096 true 
00:37:04.096 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:04.096 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.096 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.358 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:37:04.358 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:37:04.618 true 00:37:04.618 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:04.618 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.878 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.878 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:37:04.878 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 
00:37:05.138 true 00:37:05.138 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:05.138 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.399 08:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.399 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:37:05.399 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:37:05.659 true 00:37:05.659 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:05.659 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.920 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.920 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:37:05.920 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1055 00:37:06.181 true 00:37:06.181 Initializing NVMe Controllers 00:37:06.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:06.181 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:37:06.181 Controller IO queue size 128, less than required. 00:37:06.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:06.181 WARNING: Some requested NVMe devices were skipped 00:37:06.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:06.181 Initialization complete. Launching workers. 00:37:06.182 ======================================================== 00:37:06.182 Latency(us) 00:37:06.182 Device Information : IOPS MiB/s Average min max 00:37:06.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29915.63 14.61 4278.50 1489.95 10727.68 00:37:06.182 ======================================================== 00:37:06.182 Total : 29915.63 14.61 4278.50 1489.95 10727.68 00:37:06.182 00:37:06.182 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231300 00:37:06.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2231300) - No such process 00:37:06.182 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2231300 00:37:06.182 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.443 08:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
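The cycle traced above (ns_hotplug_stress.sh lines 44-50) repeats while the target process (PID 2231300 here) is alive: remove namespace 1, re-attach the Delay0 bdev as namespace 1, then grow the NULL1 bdev by one unit per pass (null_size 1050 through 1055 in this run) until the perf process exits and `kill -0` fails. A minimal dry-run sketch of that loop, with `rpc.py` stubbed by `echo` so it runs without a live SPDK target (the NQN and bdev names are taken from the log; the loop bound is a stand-in for the real exit condition):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress resize loop seen in the trace.
# RPC defaults to 'echo rpc.py' so no SPDK target is needed; point it at
# the real scripts/rpc.py to drive a live target instead.
RPC="${RPC:-echo rpc.py}"
NQN=nqn.2016-06.io.spdk:cnode1
TGT_PID="${TGT_PID:-$$}"     # stand-in for the perf process (2231300 in the log)
null_size=1050

# sh@44: loop while the target-side perf process still exists
while kill -0 "$TGT_PID" 2>/dev/null && [ "$null_size" -lt 1055 ]; do
    $RPC nvmf_subsystem_remove_ns "$NQN" 1        # sh@45
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0      # sh@46
    null_size=$((null_size + 1))                  # sh@49
    $RPC bdev_null_resize NULL1 "$null_size"      # sh@50
done
echo "final null_size=$null_size"
```

The `kill -0` probe sends no signal; it only tests whether the PID exists, which is why the trace ends with "No such process" once perf finishes and the script falls through to `wait` at sh@53.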
nqn.2016-06.io.spdk:cnode1 2 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:06.704 null0 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.704 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:06.965 null1 00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:06.965 null2 00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.965 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:07.225 null3 00:37:07.225 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.225 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.225 08:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:07.486 null4 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:07.486 null5 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.486 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:07.746 null6 00:37:07.746 08:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.746 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.746 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:08.008 null7 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:08.008 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2237322 2237324 2237327 2237330 2237333 2237336 2237339 2237342 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.009 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.269 08:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.529 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.529 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.529 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.788 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.788 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.789 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.789 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.049 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.049 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.049 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.309 08:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.309 08:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.309 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.309 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.309 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.569 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.829 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.829 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.830 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.830 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.830 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.830 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.830 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.830 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.090 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.090 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.351 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:10.351 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:10.612 08:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.612 08:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:10.612 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.873 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.873 08:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.134 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:11.395 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.657 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.657 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.658 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:11.919 rmmod nvme_tcp 00:37:11.919 rmmod nvme_fabrics 00:37:11.919 rmmod nvme_keyring 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 2230729 ']' 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 2230729 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2230729 ']' 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2230729 00:37:11.919 
08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230729 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230729' 00:37:11.919 killing process with pid 2230729 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2230729 00:37:11.919 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2230729 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:37:12.180 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:14.319 08:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:37:14.319 00:37:14.319 real 0m49.562s 00:37:14.319 user 3m3.317s 00:37:14.319 sys 0m22.354s 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@10 -- # set +x 00:37:14.319 ************************************ 00:37:14.319 END TEST nvmf_ns_hotplug_stress 00:37:14.319 ************************************ 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:14.319 ************************************ 00:37:14.319 START TEST nvmf_delete_subsystem 00:37:14.319 ************************************ 00:37:14.319 08:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:14.319 * Looking for test storage... 
00:37:14.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.583 08:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.583 08:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:14.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.583 --rc genhtml_branch_coverage=1 00:37:14.583 --rc genhtml_function_coverage=1 00:37:14.583 --rc genhtml_legend=1 00:37:14.583 --rc geninfo_all_blocks=1 00:37:14.583 --rc geninfo_unexecuted_blocks=1 00:37:14.583 00:37:14.583 ' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:14.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.583 --rc genhtml_branch_coverage=1 00:37:14.583 --rc genhtml_function_coverage=1 00:37:14.583 --rc genhtml_legend=1 00:37:14.583 --rc geninfo_all_blocks=1 00:37:14.583 --rc geninfo_unexecuted_blocks=1 00:37:14.583 00:37:14.583 ' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:14.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.583 --rc genhtml_branch_coverage=1 00:37:14.583 --rc genhtml_function_coverage=1 00:37:14.583 --rc genhtml_legend=1 00:37:14.583 --rc geninfo_all_blocks=1 00:37:14.583 --rc geninfo_unexecuted_blocks=1 00:37:14.583 00:37:14.583 ' 00:37:14.583 08:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:14.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.583 --rc genhtml_branch_coverage=1 00:37:14.583 --rc genhtml_function_coverage=1 00:37:14.583 --rc genhtml_legend=1 00:37:14.583 --rc geninfo_all_blocks=1 00:37:14.583 --rc geninfo_unexecuted_blocks=1 00:37:14.583 00:37:14.583 ' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.583 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.584 
08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:14.584 08:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:37:14.584 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.729 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:22.729 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:37:22.729 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:22.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:22.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:22.730 Found net devices under 0000:31:00.0: cvl_0_0 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:22.730 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:22.730 Found net devices under 0000:31:00.1: cvl_0_1 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:22.730 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:37:22.730 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 
-- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:22.731 10.0.0.1 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.731 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:22.991 10.0.0.2 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # 
set_up cvl_0_0 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:22.991 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:22.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:22.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.547 ms 00:37:22.992 00:37:22.992 --- 10.0.0.1 ping statistics --- 00:37:22.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.992 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:22.992 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:22.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:22.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:37:22.992 00:37:22.992 --- 10.0.0.2 ping statistics --- 00:37:22.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.992 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.992 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:22.992 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:23.252 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:23.252 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:23.253 08:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=2243155 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 2243155 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2243155 ']' 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.253 08:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:23.253 [2024-11-20 08:34:27.821960] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:23.253 [2024-11-20 08:34:27.822977] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:37:23.253 [2024-11-20 08:34:27.823015] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.253 [2024-11-20 08:34:27.909944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:23.253 [2024-11-20 08:34:27.945871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.253 [2024-11-20 08:34:27.945905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.253 [2024-11-20 08:34:27.945913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.253 [2024-11-20 08:34:27.945924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.253 [2024-11-20 08:34:27.945930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
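The target process whose startup notices appear above was launched inside the `nvmf_ns_spdk` network namespace with interrupt mode enabled. A minimal dry-run sketch of that launch command, with the flags copied verbatim from the `nvmf/common.sh@327` trace line (the `NVMF_TGT_BIN` path is a placeholder for the workspace build, and the sketch only prints the command rather than executing it):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tgt launch traced above; flags are taken
# from the log (-i 0 -e 0xFFFF --interrupt-mode -m 0x3), the binary
# path is a placeholder. Printing only -- no target is actually started.
NVMF_TGT_BIN="${NVMF_TGT_BIN:-./build/bin/nvmf_tgt}"

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask,
# -m 0x3: run reactors on cores 0 and 1 (matching the two
# "Reactor started" notices in the log)
launch_cmd="ip netns exec nvmf_ns_spdk $NVMF_TGT_BIN -i 0 -e 0xFFFF --interrupt-mode -m 0x3"
echo "$launch_cmd"
```

Running the printed command for real requires root and the `nvmf_ns_spdk` namespace created by `nvmf/setup.sh`.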
00:37:23.253 [2024-11-20 08:34:27.947188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.253 [2024-11-20 08:34:27.947190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.513 [2024-11-20 08:34:28.001639] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:23.513 [2024-11-20 08:34:28.002139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:23.513 [2024-11-20 08:34:28.002481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 
-- # set +x 00:37:24.087 [2024-11-20 08:34:28.671740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 [2024-11-20 08:34:28.700077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 NULL1 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 Delay0 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2243209 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:24.087 08:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:24.087 [2024-11-20 08:34:28.797492] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:26.635 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:26.635 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.635 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 
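The I/O failures above come from the initiator-side load generator started at `delete_subsystem.sh@26`. A hedged sketch of that `spdk_nvme_perf` invocation, with every flag copied from the trace (the `PERF_BIN` path is a placeholder; the array is only printed here, not executed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the perf workload traced above: 5 seconds of 70/30
# random read/write at queue depth 128, 512-byte I/O, against the TCP
# listener at 10.0.0.2:4420. PERF_BIN is a placeholder path.
PERF_BIN="${PERF_BIN:-./build/bin/spdk_nvme_perf}"

perf_cmd=("$PERF_BIN"
  -c 0xC                                                  # cores 2-3, disjoint from the target's 0x3
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' # transport ID of the listener above
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4)

printf '%s\n' "${perf_cmd[@]}"
```

Because the namespace sits on a `bdev_delay` device with 1s latencies, I/O is still in flight when the subsystem is deleted, which is what produces the `sc=8` (aborted) completions in the log.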
00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 starting I/O failed: -6 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 [2024-11-20 08:34:30.917325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11922c0 is same with the state(6) to be set 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read 
completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Read completed with error (sct=0, sc=8) 00:37:26.635 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error 
(sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read 
completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 starting I/O failed: -6 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 [2024-11-20 08:34:30.922018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74f400d4b0 is same with the state(6) to be set 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write 
completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error 
(sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Write completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:26.636 Read completed with error (sct=0, sc=8) 00:37:27.207 [2024-11-20 08:34:31.896016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11935e0 is same with the state(6) to be set 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.207 Read completed with error (sct=0, sc=8) 00:37:27.207 Write completed with error (sct=0, sc=8) 00:37:27.208 [2024-11-20 08:34:31.920781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11924a0 is same with the state(6) to 
be set 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 [2024-11-20 08:34:31.921269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11920e0 is same with the state(6) to be set 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 
00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 [2024-11-20 08:34:31.923317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74f400d020 is same with the state(6) to be set 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 Write completed with error (sct=0, sc=8) 00:37:27.208 Read completed with error (sct=0, sc=8) 00:37:27.208 [2024-11-20 08:34:31.924198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74f400d7e0 is same with the state(6) to be set 00:37:27.208 Initializing NVMe Controllers 00:37:27.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:27.208 Controller IO queue size 128, less than required. 
00:37:27.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:27.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:27.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:27.208 Initialization complete. Launching workers. 00:37:27.208 ======================================================== 00:37:27.208 Latency(us) 00:37:27.208 Device Information : IOPS MiB/s Average min max 00:37:27.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.30 0.08 887579.84 220.43 1006978.43 00:37:27.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.35 0.08 922981.67 279.84 1010188.91 00:37:27.208 ======================================================== 00:37:27.208 Total : 330.65 0.16 904534.33 220.43 1010188.91 00:37:27.208 00:37:27.208 [2024-11-20 08:34:31.924728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11935e0 (9): Bad file descriptor 00:37:27.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:27.208 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.208 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:27.208 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2243209 00:37:27.208 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 2243209 00:37:27.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2243209) - No such process 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2243209 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2243209 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2243209 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.780 [2024-11-20 08:34:32.459936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.780 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.781 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2243920 00:37:27.781 08:34:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:27.781 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:27.781 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:27.781 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:28.041 [2024-11-20 08:34:32.534888] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:28.302 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:28.302 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:28.302 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:28.874 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:28.874 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:28.874 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:29.446 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:29.446 08:34:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:29.446 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:30.018 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:30.018 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:30.018 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:30.278 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:30.278 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:30.278 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:30.849 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:30.849 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:30.849 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.112 Initializing NVMe Controllers 00:37:31.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:31.112 Controller IO queue size 128, less than required. 00:37:31.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:37:31.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:31.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:31.112 Initialization complete. Launching workers. 00:37:31.112 ======================================================== 00:37:31.112 Latency(us) 00:37:31.112 Device Information : IOPS MiB/s Average min max 00:37:31.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002481.98 1000195.38 1006847.73 00:37:31.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004099.71 1000346.19 1010661.10 00:37:31.112 ======================================================== 00:37:31.112 Total : 256.00 0.12 1003290.85 1000195.38 1010661.10 00:37:31.112 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243920 00:37:31.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2243920) - No such process 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2243920 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:31.373 rmmod nvme_tcp 00:37:31.373 rmmod nvme_fabrics 00:37:31.373 rmmod nvme_keyring 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 2243155 ']' 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 2243155 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2243155 ']' 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2243155 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:31.373 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2243155 00:37:31.634 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:37:31.634 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:31.634 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2243155' 00:37:31.634 killing process with pid 2243155 00:37:31.634 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2243155 00:37:31.634 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2243155 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:31.635 08:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # 
return 0 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:34.182 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- 
# eval ' ip addr flush dev cvl_0_1' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:37:34.183 00:37:34.183 real 0m19.416s 00:37:34.183 user 0m26.665s 00:37:34.183 sys 0m8.200s 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:34.183 ************************************ 00:37:34.183 END TEST nvmf_delete_subsystem 00:37:34.183 ************************************ 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:34.183 ************************************ 00:37:34.183 START TEST nvmf_host_management 00:37:34.183 ************************************ 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:34.183 * Looking for test storage... 00:37:34.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@337 -- # IFS=.-: 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:37:34.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.183 --rc genhtml_branch_coverage=1 00:37:34.183 --rc genhtml_function_coverage=1 00:37:34.183 --rc genhtml_legend=1 00:37:34.183 --rc geninfo_all_blocks=1 00:37:34.183 --rc geninfo_unexecuted_blocks=1 00:37:34.183 00:37:34.183 ' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:34.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.183 --rc genhtml_branch_coverage=1 00:37:34.183 --rc genhtml_function_coverage=1 00:37:34.183 --rc genhtml_legend=1 00:37:34.183 --rc geninfo_all_blocks=1 00:37:34.183 --rc geninfo_unexecuted_blocks=1 00:37:34.183 00:37:34.183 ' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:34.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.183 --rc genhtml_branch_coverage=1 00:37:34.183 --rc genhtml_function_coverage=1 00:37:34.183 --rc genhtml_legend=1 00:37:34.183 --rc geninfo_all_blocks=1 00:37:34.183 --rc geninfo_unexecuted_blocks=1 00:37:34.183 00:37:34.183 ' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:34.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.183 --rc genhtml_branch_coverage=1 00:37:34.183 --rc genhtml_function_coverage=1 00:37:34.183 --rc genhtml_legend=1 00:37:34.183 --rc geninfo_all_blocks=1 00:37:34.183 --rc geninfo_unexecuted_blocks=1 00:37:34.183 00:37:34.183 ' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:34.183 08:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:34.183 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.184 08:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.184 
08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:37:34.184 08:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 
00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:42.332 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:42.332 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:42.332 Found net devices under 0000:31:00.0: cvl_0_0 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:42.332 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:42.332 Found net devices under 0000:31:00.1: cvl_0_1 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:42.332 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:42.332 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < 
max + no )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 
ns=nvmf_ns_spdk 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:42.333 10.0.0.1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:42.333 10.0.0.2 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:42.333 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:42.333 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:42.333 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:42.333 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:42.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:42.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.528 ms 00:37:42.334 00:37:42.334 --- 10.0.0.1 ping statistics --- 00:37:42.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.334 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- 
# dev=cvl_0_1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:42.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:42.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:37:42.334 00:37:42.334 --- 10.0.0.2 ping statistics --- 00:37:42.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.334 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 
00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # 
get_net_dev target0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.334 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:42.335 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:42.335 08:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.335 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:42.335 08:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=2249266 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 2249266 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2249266 ']' 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:42.335 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:42.597 [2024-11-20 08:34:47.078614] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:42.597 [2024-11-20 08:34:47.079729] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:37:42.597 [2024-11-20 08:34:47.079779] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.597 [2024-11-20 08:34:47.186783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:42.597 [2024-11-20 08:34:47.233995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.597 [2024-11-20 08:34:47.234045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.597 [2024-11-20 08:34:47.234054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.597 [2024-11-20 08:34:47.234061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:42.597 [2024-11-20 08:34:47.234067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.597 [2024-11-20 08:34:47.235975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:42.597 [2024-11-20 08:34:47.236276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:42.597 [2024-11-20 08:34:47.236437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:42.597 [2024-11-20 08:34:47.236437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.597 [2024-11-20 08:34:47.309435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:42.597 [2024-11-20 08:34:47.310082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:42.597 [2024-11-20 08:34:47.310914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:42.597 [2024-11-20 08:34:47.310988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:42.597 [2024-11-20 08:34:47.311191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:43.169 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.169 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:43.169 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:43.169 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.169 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 [2024-11-20 08:34:47.929432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 08:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.432 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 Malloc0 00:37:43.432 [2024-11-20 08:34:48.017682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2249609 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2249609 /var/tmp/bdevperf.sock 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2249609 ']' 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:43.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:43.432 { 00:37:43.432 "params": { 00:37:43.432 "name": "Nvme$subsystem", 00:37:43.432 "trtype": "$TEST_TRANSPORT", 00:37:43.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:43.432 "adrfam": "ipv4", 00:37:43.432 "trsvcid": "$NVMF_PORT", 00:37:43.432 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:43.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:43.432 "hdgst": ${hdgst:-false}, 00:37:43.432 "ddgst": ${ddgst:-false} 00:37:43.432 }, 00:37:43.432 "method": "bdev_nvme_attach_controller" 00:37:43.432 } 00:37:43.432 EOF 00:37:43.432 )") 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:37:43.432 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:43.432 "params": { 00:37:43.432 "name": "Nvme0", 00:37:43.432 "trtype": "tcp", 00:37:43.432 "traddr": "10.0.0.2", 00:37:43.432 "adrfam": "ipv4", 00:37:43.432 "trsvcid": "4420", 00:37:43.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.432 "hdgst": false, 00:37:43.432 "ddgst": false 00:37:43.432 }, 00:37:43.432 "method": "bdev_nvme_attach_controller" 00:37:43.432 }' 00:37:43.432 [2024-11-20 08:34:48.120280] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:37:43.433 [2024-11-20 08:34:48.120334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249609 ] 00:37:43.694 [2024-11-20 08:34:48.198303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.694 [2024-11-20 08:34:48.234370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.694 Running I/O for 10 seconds... 
00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:44.267 08:34:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=892 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 892 -ge 100 ']' 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.267 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.268 
[2024-11-20 08:34:48.985172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1623800 is same with the state(6) to be set 00:37:44.268 [2024-11-20 08:34:48.985212 - 08:34:48.985470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: last message repeated 38 times 00:37:44.268 [2024-11-20 08:34:48.985602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.268 [2024-11-20 08:34:48.985640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.268 [2024-11-20 08:34:48.985658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.268 [2024-11-20 08:34:48.985666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.268 [2024-11-20 08:34:48.985676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.268 [2024-11-20 08:34:48.985684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.268 [2024-11-20 08:34:48.985698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.268 [2024-11-20 08:34:48.985706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.268 [2024-11-20 08:34:48.985716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.268 [2024-11-20 08:34:48.985723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.268 [2024-11-20 08:34:48.985733 - 08:34:48.986741] nvme_qpair.c: 59 further outstanding I/Os aborted the same way: READ sqid:1 cid:6-35 lba:123648-127360 and WRITE sqid:1 cid:36-63,0 lba:127488-130944,0, each len:128, all completed ABORTED - SQ DELETION (00/08) 00:37:44.270 [2024-11-20 08:34:48.988012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:44.270 task offset: 123008 on job bdev=Nvme0n1 fails 00:37:44.270 00:37:44.270 Latency(us) 00:37:44.270 [2024-11-20T07:34:48.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.270 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:44.270 Job: Nvme0n1 ended in about 0.57 seconds with error 00:37:44.270 Verification LBA range: start 0x0 length 0x400 00:37:44.270 Nvme0n1 : 0.57 1682.75 105.17 112.07 0.00 34760.71 3126.61 36700.16 00:37:44.270 [2024-11-20T07:34:48.999Z] =================================================================================================================== 00:37:44.270 
[2024-11-20T07:34:48.999Z] Total : 1682.75 105.17 112.07 0.00 34760.71 3126.61 36700.16 00:37:44.270 [2024-11-20 08:34:48.990024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:44.270 [2024-11-20 08:34:48.990048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1499b00 (9): Bad file descriptor 00:37:44.270 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.270 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:44.270 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.270 [2024-11-20 08:34:48.991141] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:44.270 [2024-11-20 08:34:48.991214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:44.270 [2024-11-20 08:34:48.991234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.270 [2024-11-20 08:34:48.991249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:44.270 [2024-11-20 08:34:48.991257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:44.270 [2024-11-20 08:34:48.991264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.270 [2024-11-20 08:34:48.991271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect 
tqpair=0x1499b00 00:37:44.270 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.270 [2024-11-20 08:34:48.991290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1499b00 (9): Bad file descriptor 00:37:44.270 [2024-11-20 08:34:48.991303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:44.270 [2024-11-20 08:34:48.991310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:44.270 [2024-11-20 08:34:48.991319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:44.270 [2024-11-20 08:34:48.991329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:44.532 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.532 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2249609 00:37:45.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2249609) - No such process 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:45.476 { 00:37:45.476 "params": { 00:37:45.476 "name": "Nvme$subsystem", 00:37:45.476 "trtype": "$TEST_TRANSPORT", 00:37:45.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.476 "adrfam": "ipv4", 00:37:45.476 "trsvcid": "$NVMF_PORT", 00:37:45.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.476 "hdgst": ${hdgst:-false}, 00:37:45.476 "ddgst": ${ddgst:-false} 00:37:45.476 }, 00:37:45.476 "method": "bdev_nvme_attach_controller" 00:37:45.476 } 00:37:45.476 EOF 00:37:45.476 )") 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:37:45.476 08:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:45.476 "params": { 00:37:45.476 "name": "Nvme0", 00:37:45.476 "trtype": "tcp", 00:37:45.476 "traddr": "10.0.0.2", 00:37:45.476 "adrfam": "ipv4", 00:37:45.476 "trsvcid": "4420", 00:37:45.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:45.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:45.476 "hdgst": false, 00:37:45.476 "ddgst": false 00:37:45.476 }, 00:37:45.476 "method": "bdev_nvme_attach_controller" 00:37:45.476 }' 00:37:45.476 [2024-11-20 08:34:50.062782] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:37:45.476 [2024-11-20 08:34:50.062841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249964 ] 00:37:45.476 [2024-11-20 08:34:50.139451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.476 [2024-11-20 08:34:50.175146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.737 Running I/O for 1 seconds... 
00:37:46.681 1974.00 IOPS, 123.38 MiB/s 00:37:46.681 Latency(us) 00:37:46.681 [2024-11-20T07:34:51.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.681 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.681 Verification LBA range: start 0x0 length 0x400 00:37:46.681 Nvme0n1 : 1.02 2002.12 125.13 0.00 0.00 31278.86 4505.60 37137.07 00:37:46.681 [2024-11-20T07:34:51.410Z] =================================================================================================================== 00:37:46.681 [2024-11-20T07:34:51.410Z] Total : 2002.12 125.13 0.00 0.00 31278.86 4505.60 37137.07 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:37:46.942 08:34:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:46.942 rmmod nvme_tcp 00:37:46.942 rmmod nvme_fabrics 00:37:46.942 rmmod nvme_keyring 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 2249266 ']' 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 2249266 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2249266 ']' 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2249266 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249266 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:46.942 08:34:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249266' 00:37:46.942 killing process with pid 2249266 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2249266 00:37:46.942 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2249266 00:37:47.203 [2024-11-20 08:34:51.735672] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:47.203 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:37:49.119 08:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 
00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:37:49.119 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:49.380 00:37:49.380 real 0m15.405s 00:37:49.380 user 0m18.887s 00:37:49.380 sys 0m8.164s 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:49.380 ************************************ 00:37:49.380 END TEST nvmf_host_management 00:37:49.380 ************************************ 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.380 ************************************ 00:37:49.380 START TEST nvmf_lvol 00:37:49.380 ************************************ 00:37:49.380 08:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:49.380 * Looking for test storage... 00:37:49.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.380 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:49.380 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:49.380 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.642 08:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.642 --rc genhtml_branch_coverage=1 00:37:49.642 --rc 
genhtml_function_coverage=1 00:37:49.642 --rc genhtml_legend=1 00:37:49.642 --rc geninfo_all_blocks=1 00:37:49.642 --rc geninfo_unexecuted_blocks=1 00:37:49.642 00:37:49.642 ' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.642 --rc genhtml_branch_coverage=1 00:37:49.642 --rc genhtml_function_coverage=1 00:37:49.642 --rc genhtml_legend=1 00:37:49.642 --rc geninfo_all_blocks=1 00:37:49.642 --rc geninfo_unexecuted_blocks=1 00:37:49.642 00:37:49.642 ' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.642 --rc genhtml_branch_coverage=1 00:37:49.642 --rc genhtml_function_coverage=1 00:37:49.642 --rc genhtml_legend=1 00:37:49.642 --rc geninfo_all_blocks=1 00:37:49.642 --rc geninfo_unexecuted_blocks=1 00:37:49.642 00:37:49.642 ' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.642 --rc genhtml_branch_coverage=1 00:37:49.642 --rc genhtml_function_coverage=1 00:37:49.642 --rc genhtml_legend=1 00:37:49.642 --rc geninfo_all_blocks=1 00:37:49.642 --rc geninfo_unexecuted_blocks=1 00:37:49.642 00:37:49.642 ' 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:37:49.642 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:49.643 08:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.643 08:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:37:49.643 08:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:57.808 08:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:37:57.808 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:57.809 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:57.809 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:57.809 Found net devices under 0000:31:00.0: cvl_0_0 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:57.809 08:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:57.809 Found net devices under 0000:31:00.1: cvl_0_1 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@44 -- # ips=() 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:57.809 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:57.810 10.0.0.1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:57.810 
08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:57.810 10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local 
dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:57.810 08:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:57.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.512 ms 00:37:57.810 00:37:57.810 --- 10.0.0.1 ping statistics --- 00:37:57.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.810 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:57.810 08:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:57.810 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:57.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:37:57.811 00:37:57.811 --- 10.0.0.2 ping statistics --- 00:37:57.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.811 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:57.811 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:58.072 08:35:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 
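The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) turn integer pool values such as 167772161 into dotted-quad addresses via `printf '%u.%u.%u.%u'`. A minimal sketch of that conversion, with the byte-shift arithmetic inferred from the traced printf arguments (the real script's internals may differ):

```shell
# Sketch (not part of the log): integer -> dotted-quad, as in setup.sh's val_to_ip.
# 167772161 == 0x0a000001, so the four shifted bytes are 10, 0, 0, 1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

This matches the trace: the pool starts at 0x0a000001 and each initiator/target pair consumes two consecutive addresses (10.0.0.1 and 10.0.0.2 here).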
00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:58.072 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 
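The `set_up`/`set_ip` helpers traced throughout this section (nvmf/setup.sh@194-207) route commands into the target namespace by passing the *name* of the `NVMF_TARGET_NS_CMD` array and binding it with a bash nameref (`local -n ns=...`), then eval'ing the joined command. A hedged sketch of that pattern, with the command echoed instead of executed since `ip` needs root:

```shell
# Hypothetical reconstruction of setup.sh's set_up, inferred from the trace.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
set_up() {
  local dev=$1 in_ns=$2
  [[ -n $in_ns ]] && local -n ns=$in_ns   # nameref: ns now aliases the array
  # The real script eval's this command line; here we only print it.
  echo "${ns[*]} ip link set $dev up"
}
set_up lo NVMF_TARGET_NS_CMD
# prints: ip netns exec nvmf_ns_spdk ip link set lo up
```

When the second argument is empty the nameref is skipped and the command runs in the host namespace, which is why the trace shows both bare `ip addr add ... dev cvl_0_0` and `ip netns exec nvmf_ns_spdk ip addr add ... dev cvl_0_1` forms.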
00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=2254998 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 2254998 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2254998 ']' 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.073 08:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.073 [2024-11-20 08:35:02.691188] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.073 [2024-11-20 08:35:02.692195] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:37:58.073 [2024-11-20 08:35:02.692236] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.073 [2024-11-20 08:35:02.776782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:58.333 [2024-11-20 08:35:02.812633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.333 [2024-11-20 08:35:02.812667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.333 [2024-11-20 08:35:02.812675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.333 [2024-11-20 08:35:02.812685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.333 [2024-11-20 08:35:02.812691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:58.333 [2024-11-20 08:35:02.813929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.333 [2024-11-20 08:35:02.814218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.333 [2024-11-20 08:35:02.814222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.333 [2024-11-20 08:35:02.868827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:58.333 [2024-11-20 08:35:02.869337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:58.333 [2024-11-20 08:35:02.869659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:58.333 [2024-11-20 08:35:02.869926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.905 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.906 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:59.166 
[2024-11-20 08:35:03.670705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.166 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.427 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:59.427 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.427 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:59.427 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:59.688 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:59.950 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8a7a34a1-e6e3-438f-9782-e84b9cc6c888 00:37:59.950 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a7a34a1-e6e3-438f-9782-e84b9cc6c888 lvol 20 00:37:59.950 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b816eb50-cc53-456a-90a4-5e3698414abe 00:37:59.950 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:00.210 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b816eb50-cc53-456a-90a4-5e3698414abe 00:38:00.211 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:00.472 [2024-11-20 08:35:05.042858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.472 08:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:00.732 08:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2255394 00:38:00.733 08:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:00.733 08:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:01.676 08:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b816eb50-cc53-456a-90a4-5e3698414abe MY_SNAPSHOT 00:38:01.936 08:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ce9a6e63-f705-4922-86e9-e023990e853b 00:38:01.936 08:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b816eb50-cc53-456a-90a4-5e3698414abe 30 00:38:02.198 08:35:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ce9a6e63-f705-4922-86e9-e023990e853b MY_CLONE 00:38:02.198 08:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a97ff32-a248-4df7-8e9b-3d7ed54f598f 00:38:02.198 08:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0a97ff32-a248-4df7-8e9b-3d7ed54f598f 00:38:02.770 08:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2255394 00:38:12.763 Initializing NVMe Controllers 00:38:12.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:12.763 Controller IO queue size 128, less than required. 00:38:12.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:12.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:12.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:12.763 Initialization complete. Launching workers. 
00:38:12.763 ======================================================== 00:38:12.763 Latency(us) 00:38:12.763 Device Information : IOPS MiB/s Average min max 00:38:12.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12309.40 48.08 10401.28 4365.84 54889.23 00:38:12.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15893.90 62.09 8052.26 2738.46 74133.94 00:38:12.763 ======================================================== 00:38:12.763 Total : 28203.30 110.17 9077.49 2738.46 74133.94 00:38:12.763 00:38:12.763 08:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:12.763 08:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b816eb50-cc53-456a-90a4-5e3698414abe 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a7a34a1-e6e3-438f-9782-e84b9cc6c888 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@102 -- # set +e 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:12.763 rmmod nvme_tcp 00:38:12.763 rmmod nvme_fabrics 00:38:12.763 rmmod nvme_keyring 00:38:12.763 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 2254998 ']' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2254998 ']' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2254998' 00:38:12.764 killing process with pid 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2254998 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:12.764 08:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:14.152 08:35:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@274 -- # iptr 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:38:14.152 00:38:14.152 real 0m24.634s 00:38:14.152 user 0m56.091s 00:38:14.152 sys 0m11.165s 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:14.152 ************************************ 00:38:14.152 END TEST nvmf_lvol 00:38:14.152 ************************************ 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:14.152 ************************************ 00:38:14.152 START TEST nvmf_lvs_grow 00:38:14.152 ************************************ 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:14.152 * Looking for test storage... 
00:38:14.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:38:14.152 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.153 08:35:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.153 08:35:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:14.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.153 --rc genhtml_branch_coverage=1 00:38:14.153 --rc genhtml_function_coverage=1 00:38:14.153 --rc genhtml_legend=1 00:38:14.153 --rc geninfo_all_blocks=1 00:38:14.153 --rc geninfo_unexecuted_blocks=1 00:38:14.153 00:38:14.153 ' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:14.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.153 --rc genhtml_branch_coverage=1 00:38:14.153 --rc genhtml_function_coverage=1 00:38:14.153 --rc genhtml_legend=1 00:38:14.153 --rc geninfo_all_blocks=1 00:38:14.153 --rc geninfo_unexecuted_blocks=1 00:38:14.153 00:38:14.153 ' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:14.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.153 --rc genhtml_branch_coverage=1 00:38:14.153 --rc genhtml_function_coverage=1 00:38:14.153 --rc genhtml_legend=1 00:38:14.153 --rc geninfo_all_blocks=1 00:38:14.153 --rc geninfo_unexecuted_blocks=1 00:38:14.153 00:38:14.153 ' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:14.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.153 --rc genhtml_branch_coverage=1 00:38:14.153 --rc genhtml_function_coverage=1 00:38:14.153 --rc genhtml_legend=1 00:38:14.153 --rc geninfo_all_blocks=1 00:38:14.153 --rc 
geninfo_unexecuted_blocks=1 00:38:14.153 00:38:14.153 ' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:14.153 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:38:14.154 08:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:22.463 
08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:22.463 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:22.463 08:35:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:22.463 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:22.463 08:35:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:22.463 Found net devices under 0000:31:00.0: cvl_0_0 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:22.463 Found net devices under 0000:31:00.1: cvl_0_1 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:22.463 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:22.464 
08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:22.464 08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:22.464 
08:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:22.464 08:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:22.464 10.0.0.1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:22.464 08:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:22.464 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:22.758 10.0.0.2 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:22.758 08:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:22.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:22.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.453 ms 00:38:22.758 00:38:22.758 --- 10.0.0.1 ping statistics --- 00:38:22.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.758 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:22.758 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:22.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:22.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:38:22.759 00:38:22.759 --- 10.0.0.2 ping statistics --- 00:38:22.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.759 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:22.759 08:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:22.759 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:22.760 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:22.760 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:22.760 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:22.760 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:22.760 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:23.050 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:23.050 08:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:23.050 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.050 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.050 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=2262298 00:38:23.050 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 2262298 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2262298 ']' 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.051 08:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.051 [2024-11-20 08:35:27.550876] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:23.051 [2024-11-20 08:35:27.552066] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:38:23.051 [2024-11-20 08:35:27.552119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.051 [2024-11-20 08:35:27.646414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.051 [2024-11-20 08:35:27.687029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.051 [2024-11-20 08:35:27.687067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.051 [2024-11-20 08:35:27.687075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.051 [2024-11-20 08:35:27.687082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.051 [2024-11-20 08:35:27.687088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.051 [2024-11-20 08:35:27.687674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.051 [2024-11-20 08:35:27.743227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:23.051 [2024-11-20 08:35:27.743489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:23.994 [2024-11-20 08:35:28.560419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.994 ************************************ 00:38:23.994 START TEST lvs_grow_clean 00:38:23.994 ************************************ 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:23.994 08:35:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.994 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:24.256 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:24.257 08:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:24.518 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 lvol 150 00:38:24.780 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c737e796-a779-402e-83e1-d86d3ac46627 00:38:24.780 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:24.780 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:25.041 [2024-11-20 08:35:29.584067] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:25.041 [2024-11-20 08:35:29.584165] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:25.041 true 00:38:25.041 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:25.041 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:25.303 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:25.303 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:25.303 08:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c737e796-a779-402e-83e1-d86d3ac46627 00:38:25.564 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:25.826 [2024-11-20 08:35:30.316760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2262809 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2262809 /var/tmp/bdevperf.sock 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2262809 ']' 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:25.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.826 08:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:25.826 [2024-11-20 08:35:30.552295] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:38:25.826 [2024-11-20 08:35:30.552356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262809 ] 00:38:26.087 [2024-11-20 08:35:30.650860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.087 [2024-11-20 08:35:30.703206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.660 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.660 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:26.660 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:27.234 Nvme0n1 00:38:27.234 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:27.234 [ 00:38:27.234 { 00:38:27.234 "name": "Nvme0n1", 00:38:27.234 "aliases": [ 00:38:27.234 "c737e796-a779-402e-83e1-d86d3ac46627" 00:38:27.234 ], 00:38:27.234 "product_name": "NVMe disk", 00:38:27.234 
"block_size": 4096, 00:38:27.234 "num_blocks": 38912, 00:38:27.234 "uuid": "c737e796-a779-402e-83e1-d86d3ac46627", 00:38:27.234 "numa_id": 0, 00:38:27.234 "assigned_rate_limits": { 00:38:27.234 "rw_ios_per_sec": 0, 00:38:27.234 "rw_mbytes_per_sec": 0, 00:38:27.234 "r_mbytes_per_sec": 0, 00:38:27.234 "w_mbytes_per_sec": 0 00:38:27.234 }, 00:38:27.234 "claimed": false, 00:38:27.234 "zoned": false, 00:38:27.234 "supported_io_types": { 00:38:27.234 "read": true, 00:38:27.234 "write": true, 00:38:27.234 "unmap": true, 00:38:27.234 "flush": true, 00:38:27.234 "reset": true, 00:38:27.234 "nvme_admin": true, 00:38:27.234 "nvme_io": true, 00:38:27.234 "nvme_io_md": false, 00:38:27.234 "write_zeroes": true, 00:38:27.234 "zcopy": false, 00:38:27.234 "get_zone_info": false, 00:38:27.234 "zone_management": false, 00:38:27.234 "zone_append": false, 00:38:27.234 "compare": true, 00:38:27.234 "compare_and_write": true, 00:38:27.234 "abort": true, 00:38:27.234 "seek_hole": false, 00:38:27.234 "seek_data": false, 00:38:27.234 "copy": true, 00:38:27.234 "nvme_iov_md": false 00:38:27.234 }, 00:38:27.234 "memory_domains": [ 00:38:27.234 { 00:38:27.234 "dma_device_id": "system", 00:38:27.234 "dma_device_type": 1 00:38:27.234 } 00:38:27.234 ], 00:38:27.234 "driver_specific": { 00:38:27.234 "nvme": [ 00:38:27.234 { 00:38:27.234 "trid": { 00:38:27.234 "trtype": "TCP", 00:38:27.234 "adrfam": "IPv4", 00:38:27.234 "traddr": "10.0.0.2", 00:38:27.234 "trsvcid": "4420", 00:38:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:27.234 }, 00:38:27.234 "ctrlr_data": { 00:38:27.234 "cntlid": 1, 00:38:27.234 "vendor_id": "0x8086", 00:38:27.234 "model_number": "SPDK bdev Controller", 00:38:27.234 "serial_number": "SPDK0", 00:38:27.234 "firmware_revision": "25.01", 00:38:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.234 "oacs": { 00:38:27.234 "security": 0, 00:38:27.234 "format": 0, 00:38:27.234 "firmware": 0, 00:38:27.234 "ns_manage": 0 00:38:27.234 }, 00:38:27.234 "multi_ctrlr": true, 
00:38:27.234 "ana_reporting": false 00:38:27.234 }, 00:38:27.234 "vs": { 00:38:27.234 "nvme_version": "1.3" 00:38:27.234 }, 00:38:27.234 "ns_data": { 00:38:27.234 "id": 1, 00:38:27.234 "can_share": true 00:38:27.234 } 00:38:27.234 } 00:38:27.234 ], 00:38:27.234 "mp_policy": "active_passive" 00:38:27.234 } 00:38:27.234 } 00:38:27.234 ] 00:38:27.234 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:27.234 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2263144 00:38:27.234 08:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:27.495 Running I/O for 10 seconds... 00:38:28.437 Latency(us) 00:38:28.437 [2024-11-20T07:35:33.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.438 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:38:28.438 [2024-11-20T07:35:33.167Z] =================================================================================================================== 00:38:28.438 [2024-11-20T07:35:33.167Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:38:28.438 00:38:29.380 08:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:29.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.380 Nvme0n1 : 2.00 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:38:29.380 [2024-11-20T07:35:34.109Z] 
=================================================================================================================== 00:38:29.380 [2024-11-20T07:35:34.109Z] Total : 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:38:29.380 00:38:29.380 true 00:38:29.380 08:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:29.640 08:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:29.640 08:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:29.640 08:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:29.640 08:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2263144 00:38:30.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.583 Nvme0n1 : 3.00 17895.00 69.90 0.00 0.00 0.00 0.00 0.00 00:38:30.583 [2024-11-20T07:35:35.312Z] =================================================================================================================== 00:38:30.583 [2024-11-20T07:35:35.312Z] Total : 17895.00 69.90 0.00 0.00 0.00 0.00 0.00 00:38:30.583 00:38:31.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.525 Nvme0n1 : 4.00 17941.25 70.08 0.00 0.00 0.00 0.00 0.00 00:38:31.525 [2024-11-20T07:35:36.254Z] =================================================================================================================== 00:38:31.525 [2024-11-20T07:35:36.254Z] Total : 17941.25 70.08 0.00 0.00 0.00 0.00 0.00 00:38:31.525 00:38:32.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:32.469 Nvme0n1 : 5.00 17959.80 70.16 0.00 0.00 0.00 0.00 0.00 00:38:32.469 [2024-11-20T07:35:37.198Z] =================================================================================================================== 00:38:32.469 [2024-11-20T07:35:37.198Z] Total : 17959.80 70.16 0.00 0.00 0.00 0.00 0.00 00:38:32.469 00:38:33.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.411 Nvme0n1 : 6.00 17985.67 70.26 0.00 0.00 0.00 0.00 0.00 00:38:33.411 [2024-11-20T07:35:38.140Z] =================================================================================================================== 00:38:33.411 [2024-11-20T07:35:38.140Z] Total : 17985.67 70.26 0.00 0.00 0.00 0.00 0.00 00:38:33.411 00:38:34.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:34.353 Nvme0n1 : 7.00 17999.14 70.31 0.00 0.00 0.00 0.00 0.00 00:38:34.353 [2024-11-20T07:35:39.082Z] =================================================================================================================== 00:38:34.353 [2024-11-20T07:35:39.082Z] Total : 17999.14 70.31 0.00 0.00 0.00 0.00 0.00 00:38:34.353 00:38:35.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.293 Nvme0n1 : 8.00 18019.38 70.39 0.00 0.00 0.00 0.00 0.00 00:38:35.293 [2024-11-20T07:35:40.022Z] =================================================================================================================== 00:38:35.293 [2024-11-20T07:35:40.022Z] Total : 18019.38 70.39 0.00 0.00 0.00 0.00 0.00 00:38:35.293 00:38:36.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.676 Nvme0n1 : 9.00 18021.00 70.39 0.00 0.00 0.00 0.00 0.00 00:38:36.676 [2024-11-20T07:35:41.405Z] =================================================================================================================== 00:38:36.676 [2024-11-20T07:35:41.405Z] Total : 18021.00 70.39 0.00 0.00 0.00 0.00 0.00 00:38:36.676 
00:38:37.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.617 Nvme0n1 : 10.00 18035.00 70.45 0.00 0.00 0.00 0.00 0.00 00:38:37.617 [2024-11-20T07:35:42.346Z] =================================================================================================================== 00:38:37.617 [2024-11-20T07:35:42.346Z] Total : 18035.00 70.45 0.00 0.00 0.00 0.00 0.00 00:38:37.617 00:38:37.617 00:38:37.617 Latency(us) 00:38:37.617 [2024-11-20T07:35:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.617 Nvme0n1 : 10.01 18038.58 70.46 0.00 0.00 7093.36 2689.71 14090.24 00:38:37.617 [2024-11-20T07:35:42.346Z] =================================================================================================================== 00:38:37.617 [2024-11-20T07:35:42.346Z] Total : 18038.58 70.46 0.00 0.00 7093.36 2689.71 14090.24 00:38:37.617 { 00:38:37.617 "results": [ 00:38:37.617 { 00:38:37.617 "job": "Nvme0n1", 00:38:37.617 "core_mask": "0x2", 00:38:37.617 "workload": "randwrite", 00:38:37.617 "status": "finished", 00:38:37.617 "queue_depth": 128, 00:38:37.617 "io_size": 4096, 00:38:37.617 "runtime": 10.005114, 00:38:37.617 "iops": 18038.575072707816, 00:38:37.617 "mibps": 70.4631838777649, 00:38:37.617 "io_failed": 0, 00:38:37.617 "io_timeout": 0, 00:38:37.617 "avg_latency_us": 7093.360950660653, 00:38:37.617 "min_latency_us": 2689.7066666666665, 00:38:37.617 "max_latency_us": 14090.24 00:38:37.617 } 00:38:37.617 ], 00:38:37.617 "core_count": 1 00:38:37.617 } 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2262809 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2262809 ']' 00:38:37.617 08:35:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2262809 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262809 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262809' 00:38:37.617 killing process with pid 2262809 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2262809 00:38:37.617 Received shutdown signal, test time was about 10.000000 seconds 00:38:37.617 00:38:37.617 Latency(us) 00:38:37.617 [2024-11-20T07:35:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.617 [2024-11-20T07:35:42.346Z] =================================================================================================================== 00:38:37.617 [2024-11-20T07:35:42.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2262809 00:38:37.617 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.878 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:37.878 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:37.878 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:38.138 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:38.138 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:38.138 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:38.397 [2024-11-20 08:35:42.876109] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:38.397 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:38.397 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:38.398 08:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:38.398 request: 00:38:38.398 { 00:38:38.398 "uuid": "37ae3c3c-41e4-44dc-af4b-072efaa0f272", 00:38:38.398 "method": 
"bdev_lvol_get_lvstores", 00:38:38.398 "req_id": 1 00:38:38.398 } 00:38:38.398 Got JSON-RPC error response 00:38:38.398 response: 00:38:38.398 { 00:38:38.398 "code": -19, 00:38:38.398 "message": "No such device" 00:38:38.398 } 00:38:38.398 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:38.398 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:38.398 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:38.398 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:38.398 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:38.657 aio_bdev 00:38:38.657 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c737e796-a779-402e-83e1-d86d3ac46627 00:38:38.657 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c737e796-a779-402e-83e1-d86d3ac46627 00:38:38.657 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:38.658 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:38.658 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:38.658 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:38.658 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:38.919 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c737e796-a779-402e-83e1-d86d3ac46627 -t 2000 00:38:38.919 [ 00:38:38.919 { 00:38:38.919 "name": "c737e796-a779-402e-83e1-d86d3ac46627", 00:38:38.919 "aliases": [ 00:38:38.919 "lvs/lvol" 00:38:38.919 ], 00:38:38.919 "product_name": "Logical Volume", 00:38:38.919 "block_size": 4096, 00:38:38.919 "num_blocks": 38912, 00:38:38.919 "uuid": "c737e796-a779-402e-83e1-d86d3ac46627", 00:38:38.919 "assigned_rate_limits": { 00:38:38.919 "rw_ios_per_sec": 0, 00:38:38.919 "rw_mbytes_per_sec": 0, 00:38:38.919 "r_mbytes_per_sec": 0, 00:38:38.919 "w_mbytes_per_sec": 0 00:38:38.919 }, 00:38:38.919 "claimed": false, 00:38:38.919 "zoned": false, 00:38:38.919 "supported_io_types": { 00:38:38.919 "read": true, 00:38:38.919 "write": true, 00:38:38.919 "unmap": true, 00:38:38.919 "flush": false, 00:38:38.919 "reset": true, 00:38:38.919 "nvme_admin": false, 00:38:38.919 "nvme_io": false, 00:38:38.919 "nvme_io_md": false, 00:38:38.919 "write_zeroes": true, 00:38:38.919 "zcopy": false, 00:38:38.919 "get_zone_info": false, 00:38:38.919 "zone_management": false, 00:38:38.919 "zone_append": false, 00:38:38.919 "compare": false, 00:38:38.919 "compare_and_write": false, 00:38:38.919 "abort": false, 00:38:38.919 "seek_hole": true, 00:38:38.919 "seek_data": true, 00:38:38.919 "copy": false, 00:38:38.919 "nvme_iov_md": false 00:38:38.919 }, 00:38:38.919 "driver_specific": { 00:38:38.919 "lvol": { 00:38:38.919 "lvol_store_uuid": "37ae3c3c-41e4-44dc-af4b-072efaa0f272", 00:38:38.919 "base_bdev": "aio_bdev", 00:38:38.919 
"thin_provision": false, 00:38:38.919 "num_allocated_clusters": 38, 00:38:38.919 "snapshot": false, 00:38:38.919 "clone": false, 00:38:38.919 "esnap_clone": false 00:38:38.919 } 00:38:38.919 } 00:38:38.919 } 00:38:38.919 ] 00:38:38.919 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:38.919 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:38.919 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:39.179 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:39.179 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 00:38:39.179 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:39.440 08:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:39.440 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c737e796-a779-402e-83e1-d86d3ac46627 00:38:39.699 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37ae3c3c-41e4-44dc-af4b-072efaa0f272 
00:38:39.699 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:39.960 00:38:39.960 real 0m15.936s 00:38:39.960 user 0m15.554s 00:38:39.960 sys 0m1.484s 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:39.960 ************************************ 00:38:39.960 END TEST lvs_grow_clean 00:38:39.960 ************************************ 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:39.960 ************************************ 00:38:39.960 START TEST lvs_grow_dirty 00:38:39.960 ************************************ 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:39.960 08:35:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:39.960 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:40.221 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:40.221 08:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:40.481 08:35:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:40.481 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:40.481 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:40.481 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:40.481 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:40.742 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd lvol 150 00:38:40.742 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:40.742 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:40.742 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:41.003 [2024-11-20 08:35:45.552155] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:41.003 [2024-11-20 
08:35:45.552315] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:41.003 true 00:38:41.003 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:41.003 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:41.265 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:41.265 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:41.265 08:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:41.526 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:41.787 [2024-11-20 08:35:46.288694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2265889 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2265889 /var/tmp/bdevperf.sock 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2265889 ']' 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:41.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.787 08:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:42.047 [2024-11-20 08:35:46.539821] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:38:42.047 [2024-11-20 08:35:46.539888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265889 ] 00:38:42.047 [2024-11-20 08:35:46.631963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.047 [2024-11-20 08:35:46.663463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.619 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.619 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:42.619 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:43.191 Nvme0n1 00:38:43.191 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:43.191 [ 00:38:43.191 { 00:38:43.191 "name": "Nvme0n1", 00:38:43.191 "aliases": [ 00:38:43.191 "ff5e26ad-5b8c-4425-aa42-a4454a116236" 00:38:43.191 ], 00:38:43.191 "product_name": "NVMe disk", 00:38:43.191 "block_size": 4096, 00:38:43.191 "num_blocks": 38912, 00:38:43.191 "uuid": "ff5e26ad-5b8c-4425-aa42-a4454a116236", 00:38:43.191 "numa_id": 0, 00:38:43.191 "assigned_rate_limits": { 00:38:43.191 "rw_ios_per_sec": 0, 00:38:43.191 "rw_mbytes_per_sec": 0, 00:38:43.191 "r_mbytes_per_sec": 0, 00:38:43.191 "w_mbytes_per_sec": 0 00:38:43.191 }, 00:38:43.191 "claimed": false, 00:38:43.191 "zoned": false, 
00:38:43.191 "supported_io_types": { 00:38:43.191 "read": true, 00:38:43.191 "write": true, 00:38:43.191 "unmap": true, 00:38:43.191 "flush": true, 00:38:43.191 "reset": true, 00:38:43.191 "nvme_admin": true, 00:38:43.191 "nvme_io": true, 00:38:43.191 "nvme_io_md": false, 00:38:43.191 "write_zeroes": true, 00:38:43.191 "zcopy": false, 00:38:43.191 "get_zone_info": false, 00:38:43.191 "zone_management": false, 00:38:43.191 "zone_append": false, 00:38:43.191 "compare": true, 00:38:43.191 "compare_and_write": true, 00:38:43.191 "abort": true, 00:38:43.191 "seek_hole": false, 00:38:43.191 "seek_data": false, 00:38:43.191 "copy": true, 00:38:43.191 "nvme_iov_md": false 00:38:43.191 }, 00:38:43.191 "memory_domains": [ 00:38:43.191 { 00:38:43.191 "dma_device_id": "system", 00:38:43.191 "dma_device_type": 1 00:38:43.191 } 00:38:43.191 ], 00:38:43.191 "driver_specific": { 00:38:43.191 "nvme": [ 00:38:43.191 { 00:38:43.191 "trid": { 00:38:43.191 "trtype": "TCP", 00:38:43.191 "adrfam": "IPv4", 00:38:43.191 "traddr": "10.0.0.2", 00:38:43.191 "trsvcid": "4420", 00:38:43.191 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:43.191 }, 00:38:43.191 "ctrlr_data": { 00:38:43.191 "cntlid": 1, 00:38:43.191 "vendor_id": "0x8086", 00:38:43.191 "model_number": "SPDK bdev Controller", 00:38:43.191 "serial_number": "SPDK0", 00:38:43.191 "firmware_revision": "25.01", 00:38:43.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.191 "oacs": { 00:38:43.191 "security": 0, 00:38:43.191 "format": 0, 00:38:43.191 "firmware": 0, 00:38:43.191 "ns_manage": 0 00:38:43.191 }, 00:38:43.191 "multi_ctrlr": true, 00:38:43.191 "ana_reporting": false 00:38:43.191 }, 00:38:43.191 "vs": { 00:38:43.191 "nvme_version": "1.3" 00:38:43.191 }, 00:38:43.191 "ns_data": { 00:38:43.191 "id": 1, 00:38:43.191 "can_share": true 00:38:43.191 } 00:38:43.191 } 00:38:43.191 ], 00:38:43.191 "mp_policy": "active_passive" 00:38:43.191 } 00:38:43.191 } 00:38:43.191 ] 00:38:43.191 08:35:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2266220 00:38:43.191 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:43.191 08:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:43.451 Running I/O for 10 seconds... 00:38:44.392 Latency(us) 00:38:44.392 [2024-11-20T07:35:49.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:44.392 Nvme0n1 : 1.00 17781.00 69.46 0.00 0.00 0.00 0.00 0.00 00:38:44.392 [2024-11-20T07:35:49.121Z] =================================================================================================================== 00:38:44.392 [2024-11-20T07:35:49.121Z] Total : 17781.00 69.46 0.00 0.00 0.00 0.00 0.00 00:38:44.392 00:38:45.360 08:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:45.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.360 Nvme0n1 : 2.00 17907.50 69.95 0.00 0.00 0.00 0.00 0.00 00:38:45.360 [2024-11-20T07:35:50.089Z] =================================================================================================================== 00:38:45.360 [2024-11-20T07:35:50.089Z] Total : 17907.50 69.95 0.00 0.00 0.00 0.00 0.00 00:38:45.360 00:38:45.360 true 00:38:45.360 08:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:45.360 08:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:45.621 08:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:45.621 08:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:45.621 08:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2266220 00:38:46.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:46.562 Nvme0n1 : 3.00 17907.33 69.95 0.00 0.00 0.00 0.00 0.00 00:38:46.562 [2024-11-20T07:35:51.291Z] =================================================================================================================== 00:38:46.562 [2024-11-20T07:35:51.291Z] Total : 17907.33 69.95 0.00 0.00 0.00 0.00 0.00 00:38:46.562 00:38:47.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:47.503 Nvme0n1 : 4.00 17943.25 70.09 0.00 0.00 0.00 0.00 0.00 00:38:47.503 [2024-11-20T07:35:52.232Z] =================================================================================================================== 00:38:47.503 [2024-11-20T07:35:52.232Z] Total : 17943.25 70.09 0.00 0.00 0.00 0.00 0.00 00:38:47.503 00:38:48.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.445 Nvme0n1 : 5.00 17986.80 70.26 0.00 0.00 0.00 0.00 0.00 00:38:48.445 [2024-11-20T07:35:53.174Z] =================================================================================================================== 00:38:48.445 [2024-11-20T07:35:53.174Z] Total : 17986.80 70.26 0.00 0.00 0.00 0.00 0.00 00:38:48.445 00:38:49.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:49.385 Nvme0n1 : 6.00 17994.67 70.29 0.00 0.00 0.00 0.00 0.00 00:38:49.385 [2024-11-20T07:35:54.114Z] =================================================================================================================== 00:38:49.385 [2024-11-20T07:35:54.114Z] Total : 17994.67 70.29 0.00 0.00 0.00 0.00 0.00 00:38:49.385 00:38:50.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.326 Nvme0n1 : 7.00 18018.43 70.38 0.00 0.00 0.00 0.00 0.00 00:38:50.326 [2024-11-20T07:35:55.055Z] =================================================================================================================== 00:38:50.326 [2024-11-20T07:35:55.055Z] Total : 18018.43 70.38 0.00 0.00 0.00 0.00 0.00 00:38:50.326 00:38:51.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:51.266 Nvme0n1 : 8.00 18036.25 70.45 0.00 0.00 0.00 0.00 0.00 00:38:51.266 [2024-11-20T07:35:55.996Z] =================================================================================================================== 00:38:51.267 [2024-11-20T07:35:55.996Z] Total : 18036.25 70.45 0.00 0.00 0.00 0.00 0.00 00:38:51.267 00:38:52.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:52.651 Nvme0n1 : 9.00 18050.11 70.51 0.00 0.00 0.00 0.00 0.00 00:38:52.651 [2024-11-20T07:35:57.380Z] =================================================================================================================== 00:38:52.651 [2024-11-20T07:35:57.380Z] Total : 18050.11 70.51 0.00 0.00 0.00 0.00 0.00 00:38:52.651 00:38:53.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.593 Nvme0n1 : 10.00 18061.20 70.55 0.00 0.00 0.00 0.00 0.00 00:38:53.593 [2024-11-20T07:35:58.322Z] =================================================================================================================== 00:38:53.593 [2024-11-20T07:35:58.322Z] Total : 18061.20 70.55 0.00 0.00 0.00 0.00 0.00 00:38:53.593 00:38:53.593 
00:38:53.593 Latency(us) 00:38:53.593 [2024-11-20T07:35:58.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.593 Nvme0n1 : 10.00 18060.19 70.55 0.00 0.00 7084.77 1686.19 13817.17 00:38:53.593 [2024-11-20T07:35:58.322Z] =================================================================================================================== 00:38:53.593 [2024-11-20T07:35:58.322Z] Total : 18060.19 70.55 0.00 0.00 7084.77 1686.19 13817.17 00:38:53.593 { 00:38:53.593 "results": [ 00:38:53.593 { 00:38:53.593 "job": "Nvme0n1", 00:38:53.593 "core_mask": "0x2", 00:38:53.593 "workload": "randwrite", 00:38:53.593 "status": "finished", 00:38:53.593 "queue_depth": 128, 00:38:53.593 "io_size": 4096, 00:38:53.593 "runtime": 10.004158, 00:38:53.593 "iops": 18060.190572759846, 00:38:53.593 "mibps": 70.54761942484315, 00:38:53.593 "io_failed": 0, 00:38:53.593 "io_timeout": 0, 00:38:53.593 "avg_latency_us": 7084.767900286146, 00:38:53.593 "min_latency_us": 1686.1866666666667, 00:38:53.593 "max_latency_us": 13817.173333333334 00:38:53.593 } 00:38:53.593 ], 00:38:53.593 "core_count": 1 00:38:53.593 } 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2265889 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2265889 ']' 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2265889 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.593 08:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265889 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265889' 00:38:53.593 killing process with pid 2265889 00:38:53.593 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2265889 00:38:53.593 Received shutdown signal, test time was about 10.000000 seconds 00:38:53.593 00:38:53.593 Latency(us) 00:38:53.593 [2024-11-20T07:35:58.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.593 [2024-11-20T07:35:58.322Z] =================================================================================================================== 00:38:53.593 [2024-11-20T07:35:58.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:53.594 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2265889 00:38:53.594 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:53.855 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:54.117 08:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2262298 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2262298 00:38:54.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2262298 Killed "${NVMF_APP[@]}" "$@" 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=2268240 00:38:54.117 08:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 2268240 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2268240 ']' 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.117 08:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:54.378 [2024-11-20 08:35:58.885940] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:54.378 [2024-11-20 08:35:58.887652] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:38:54.378 [2024-11-20 08:35:58.887726] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.378 [2024-11-20 08:35:58.975186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.378 [2024-11-20 08:35:59.010302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.378 [2024-11-20 08:35:59.010336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.378 [2024-11-20 08:35:59.010344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.378 [2024-11-20 08:35:59.010351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.378 [2024-11-20 08:35:59.010356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.378 [2024-11-20 08:35:59.010888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.378 [2024-11-20 08:35:59.064959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:54.378 [2024-11-20 08:35:59.065213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:54.967 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:54.967 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:54.967 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:54.967 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:54.967 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:55.228 [2024-11-20 08:35:59.877840] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:55.228 [2024-11-20 08:35:59.877985] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:55.228 [2024-11-20 08:35:59.878020] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:55.228 08:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:55.489 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ff5e26ad-5b8c-4425-aa42-a4454a116236 -t 2000 00:38:55.750 [ 00:38:55.750 { 00:38:55.750 "name": "ff5e26ad-5b8c-4425-aa42-a4454a116236", 00:38:55.750 "aliases": [ 00:38:55.750 "lvs/lvol" 00:38:55.750 ], 00:38:55.750 "product_name": "Logical Volume", 00:38:55.750 "block_size": 4096, 00:38:55.750 "num_blocks": 38912, 00:38:55.750 "uuid": "ff5e26ad-5b8c-4425-aa42-a4454a116236", 00:38:55.750 "assigned_rate_limits": { 00:38:55.750 "rw_ios_per_sec": 0, 00:38:55.750 "rw_mbytes_per_sec": 0, 00:38:55.750 "r_mbytes_per_sec": 0, 00:38:55.750 "w_mbytes_per_sec": 0 00:38:55.750 }, 00:38:55.750 "claimed": false, 00:38:55.750 "zoned": false, 00:38:55.750 "supported_io_types": { 00:38:55.750 "read": true, 00:38:55.750 "write": true, 00:38:55.750 "unmap": true, 00:38:55.750 "flush": false, 00:38:55.750 "reset": true, 00:38:55.750 "nvme_admin": false, 00:38:55.750 "nvme_io": false, 00:38:55.750 "nvme_io_md": false, 00:38:55.750 "write_zeroes": true, 
00:38:55.750 "zcopy": false, 00:38:55.750 "get_zone_info": false, 00:38:55.750 "zone_management": false, 00:38:55.750 "zone_append": false, 00:38:55.750 "compare": false, 00:38:55.750 "compare_and_write": false, 00:38:55.750 "abort": false, 00:38:55.750 "seek_hole": true, 00:38:55.750 "seek_data": true, 00:38:55.750 "copy": false, 00:38:55.750 "nvme_iov_md": false 00:38:55.750 }, 00:38:55.750 "driver_specific": { 00:38:55.750 "lvol": { 00:38:55.750 "lvol_store_uuid": "cf11af45-2b4c-4f4f-b9f0-27ef688c51fd", 00:38:55.750 "base_bdev": "aio_bdev", 00:38:55.750 "thin_provision": false, 00:38:55.750 "num_allocated_clusters": 38, 00:38:55.750 "snapshot": false, 00:38:55.750 "clone": false, 00:38:55.750 "esnap_clone": false 00:38:55.750 } 00:38:55.750 } 00:38:55.750 } 00:38:55.750 ] 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:55.750 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:56.012 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:56.012 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:56.275 [2024-11-20 08:36:00.775444] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:56.275 request: 00:38:56.275 { 00:38:56.275 "uuid": "cf11af45-2b4c-4f4f-b9f0-27ef688c51fd", 00:38:56.275 "method": "bdev_lvol_get_lvstores", 00:38:56.275 "req_id": 1 00:38:56.275 } 00:38:56.275 Got JSON-RPC error response 00:38:56.275 response: 00:38:56.275 { 00:38:56.275 "code": -19, 00:38:56.275 "message": "No such device" 00:38:56.275 } 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:56.275 08:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:56.536 aio_bdev 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:56.536 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:56.798 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ff5e26ad-5b8c-4425-aa42-a4454a116236 -t 2000 00:38:56.798 [ 00:38:56.798 { 00:38:56.798 "name": "ff5e26ad-5b8c-4425-aa42-a4454a116236", 00:38:56.798 "aliases": [ 00:38:56.798 "lvs/lvol" 00:38:56.798 ], 00:38:56.798 "product_name": "Logical Volume", 00:38:56.798 "block_size": 4096, 00:38:56.798 "num_blocks": 38912, 00:38:56.798 "uuid": "ff5e26ad-5b8c-4425-aa42-a4454a116236", 00:38:56.798 "assigned_rate_limits": { 00:38:56.798 "rw_ios_per_sec": 0, 00:38:56.798 "rw_mbytes_per_sec": 0, 00:38:56.798 
"r_mbytes_per_sec": 0, 00:38:56.798 "w_mbytes_per_sec": 0 00:38:56.798 }, 00:38:56.798 "claimed": false, 00:38:56.798 "zoned": false, 00:38:56.798 "supported_io_types": { 00:38:56.798 "read": true, 00:38:56.798 "write": true, 00:38:56.798 "unmap": true, 00:38:56.798 "flush": false, 00:38:56.798 "reset": true, 00:38:56.798 "nvme_admin": false, 00:38:56.798 "nvme_io": false, 00:38:56.798 "nvme_io_md": false, 00:38:56.798 "write_zeroes": true, 00:38:56.798 "zcopy": false, 00:38:56.798 "get_zone_info": false, 00:38:56.798 "zone_management": false, 00:38:56.798 "zone_append": false, 00:38:56.798 "compare": false, 00:38:56.798 "compare_and_write": false, 00:38:56.798 "abort": false, 00:38:56.798 "seek_hole": true, 00:38:56.798 "seek_data": true, 00:38:56.798 "copy": false, 00:38:56.798 "nvme_iov_md": false 00:38:56.798 }, 00:38:56.798 "driver_specific": { 00:38:56.798 "lvol": { 00:38:56.798 "lvol_store_uuid": "cf11af45-2b4c-4f4f-b9f0-27ef688c51fd", 00:38:56.798 "base_bdev": "aio_bdev", 00:38:56.798 "thin_provision": false, 00:38:56.798 "num_allocated_clusters": 38, 00:38:56.798 "snapshot": false, 00:38:56.798 "clone": false, 00:38:56.798 "esnap_clone": false 00:38:56.798 } 00:38:56.798 } 00:38:56.798 } 00:38:56.798 ] 00:38:57.058 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:57.059 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:57.059 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:57.059 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:57.059 08:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:57.059 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:57.320 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:57.320 08:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ff5e26ad-5b8c-4425-aa42-a4454a116236 00:38:57.581 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf11af45-2b4c-4f4f-b9f0-27ef688c51fd 00:38:57.581 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:57.842 00:38:57.842 real 0m17.824s 00:38:57.842 user 0m35.679s 00:38:57.842 sys 0m3.003s 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:57.842 ************************************ 00:38:57.842 END TEST lvs_grow_dirty 00:38:57.842 ************************************ 
00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:57.842 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:57.843 nvmf_trace.0 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:58.103 08:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:58.103 rmmod nvme_tcp 00:38:58.103 rmmod nvme_fabrics 00:38:58.103 rmmod nvme_keyring 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 2268240 ']' 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 2268240 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2268240 ']' 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2268240 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268240 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:58.103 
08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268240' 00:38:58.103 killing process with pid 2268240 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2268240 00:38:58.103 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2268240 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:58.364 08:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 
00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:39:00.278 00:39:00.278 real 0m46.287s 00:39:00.278 user 0m54.553s 00:39:00.278 sys 0m11.403s 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:00.278 ************************************ 00:39:00.278 END TEST nvmf_lvs_grow 00:39:00.278 ************************************ 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:00.278 08:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:00.278 ************************************ 00:39:00.278 START TEST nvmf_bdev_io_wait 00:39:00.278 ************************************ 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh 
--transport=tcp --interrupt-mode 00:39:00.542 * Looking for test storage... 00:39:00.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.542 08:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.542 --rc genhtml_branch_coverage=1 00:39:00.542 --rc genhtml_function_coverage=1 00:39:00.542 --rc genhtml_legend=1 00:39:00.542 --rc geninfo_all_blocks=1 00:39:00.542 --rc geninfo_unexecuted_blocks=1 00:39:00.542 00:39:00.542 ' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.542 --rc genhtml_branch_coverage=1 00:39:00.542 --rc genhtml_function_coverage=1 00:39:00.542 --rc genhtml_legend=1 00:39:00.542 --rc geninfo_all_blocks=1 00:39:00.542 --rc geninfo_unexecuted_blocks=1 00:39:00.542 00:39:00.542 ' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.542 --rc genhtml_branch_coverage=1 00:39:00.542 --rc genhtml_function_coverage=1 00:39:00.542 --rc genhtml_legend=1 00:39:00.542 --rc geninfo_all_blocks=1 00:39:00.542 --rc geninfo_unexecuted_blocks=1 00:39:00.542 00:39:00.542 ' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.542 
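The `lt 1.15 2` / `cmp_versions` trace above compares two dotted version strings field by field (splitting on `.`/`-`, padding the shorter one with zeros). A minimal standalone sketch of that comparison — a hypothetical re-implementation for illustration, not SPDK's actual `scripts/common.sh` — might look like:

```shell
# Sketch: return 0 (true) when version $1 is strictly less than version $2.
# Hypothetical helper; mirrors the field-wise compare traced in the log above.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so "1.15" vs "2" becomes 1.15 vs 2.0
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

This matches the decision the log reaches (`lt 1.15 2` succeeds, selecting the newer lcov option syntax), though the real script builds `ver1`/`ver2` arrays with `read -ra` and a `case "$op"` dispatch.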
--rc genhtml_branch_coverage=1 00:39:00.542 --rc genhtml_function_coverage=1 00:39:00.542 --rc genhtml_legend=1 00:39:00.542 --rc geninfo_all_blocks=1 00:39:00.542 --rc geninfo_unexecuted_blocks=1 00:39:00.542 00:39:00.542 ' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.542 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.543 08:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.543 08:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:00.543 08:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:39:00.543 08:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:08.692 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.692 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.692 
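The device enumeration traced above buckets NICs into `e810`, `x722`, and `mlx` groups by PCI vendor:device ID before deciding which ones the TCP transport can use. A condensed sketch of that classification — helper name hypothetical, IDs taken from the trace (`intel=0x8086`, `mellanox=0x15b3`) — could be:

```shell
# Sketch: classify a NIC by "vendor:device" PCI id, mirroring the e810/x722/mlx
# grouping in the trace above. classify_nic is a hypothetical helper, not SPDK code.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # prints: e810
```

The two `Found 0000:31:00.x (0x8086 - 0x159b)` lines in the log correspond to the first branch, which is why `pci_devs` is narrowed to the `e810` list.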
08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:08.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.954 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:08.955 Found net devices under 0000:31:00.0: cvl_0_0 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:08.955 Found net devices under 0000:31:00.1: cvl_0_1 
00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:08.955 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:08.955 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:08.955 10.0.0.1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:08.955 10.0.0.2 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:08.955 
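The `val_to_ip` steps above turn the 32-bit IP-pool counter into dotted-quad form: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the next value yields 10.0.0.2. A minimal sketch of that conversion (the byte-shift arithmetic is an assumption; the log only shows the resulting `printf '%u.%u.%u.%u\n'` call):

```shell
# Sketch of nvmf/setup.sh's val_to_ip: split a 32-bit integer into four bytes
# and print them dotted. 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # prints: 10.0.0.1
val_to_ip 167772162   # prints: 10.0.0.2
```

This explains the `ip_pool=0x0a000001` seen earlier in `setup_interfaces`: each initiator/target pair consumes two consecutive addresses (`ip_pool += 2`), so pair 0 gets 10.0.0.1 on `cvl_0_0` and 10.0.0.2 on `cvl_0_1` inside the `nvmf_ns_spdk` namespace.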
08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:08.955 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:09.250 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:09.250 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:09.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:09.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.512 ms 00:39:09.251 00:39:09.251 --- 10.0.0.1 ping statistics --- 00:39:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.251 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:09.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:09.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:39:09.251 00:39:09.251 --- 10.0.0.2 ping statistics --- 00:39:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.251 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:09.251 08:36:13 
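The repeated get_ip_address walks above (setup.sh@156-166) all resolve to a `cat` of the interface's kernel ifalias file, which the setup phase populated with `tee` earlier, optionally executed inside the target's network namespace. A sketch of that lookup; `SYSFS_NET` is an added knob (not in the original helper) so the sketch can be exercised against a fake sysfs tree:

```shell
# Sketch of the get_ip_address lookup (setup.sh@156-166 above): the IP was
# stashed in /sys/class/net/<dev>/ifalias at setup time, so reads are a cat,
# wrapped in "ip netns exec" when the device lives in the target namespace.
# SYSFS_NET is an assumption added for testability; the real helper reads
# /sys/class/net directly.
get_ip_sketch() {
    local dev=$1 netns=${2:-}
    local path="${SYSFS_NET:-/sys/class/net}/$dev/ifalias"
    if [[ -n $netns ]]; then
        ip netns exec "$netns" cat "$path"
    else
        cat "$path"
    fi
}
```

Storing the address in ifalias rather than a shell variable is what lets both the initiator side and the namespaced target side recover it from the same place.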
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:09.251 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:09.251 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:09.251 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.252 08:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=2274229 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 2274229 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2274229 ']' 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:09.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.252 08:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:09.550 [2024-11-20 08:36:13.985770] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:09.550 [2024-11-20 08:36:13.986918] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:39:09.550 [2024-11-20 08:36:13.986972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.550 [2024-11-20 08:36:14.077545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.550 [2024-11-20 08:36:14.119948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.550 [2024-11-20 08:36:14.119986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.550 [2024-11-20 08:36:14.119994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.550 [2024-11-20 08:36:14.120000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.550 [2024-11-20 08:36:14.120006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:09.550 [2024-11-20 08:36:14.121587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.550 [2024-11-20 08:36:14.121710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:09.550 [2024-11-20 08:36:14.121898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.550 [2024-11-20 08:36:14.121926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.550 [2024-11-20 08:36:14.122312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.163 08:36:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.163 [2024-11-20 08:36:14.867883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:10.163 [2024-11-20 08:36:14.868490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:10.163 [2024-11-20 08:36:14.868962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:10.163 [2024-11-20 08:36:14.869173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.163 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.163 [2024-11-20 08:36:14.878483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.424 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.425 Malloc0 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.425 08:36:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.425 [2024-11-20 08:36:14.938681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2274468 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2274471 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:10.425 08:36:14 
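The rpc_cmd calls above (bdev_io_wait.sh@18-25) configure the target step by step: bdev options, deferred framework init, the TCP transport, a 64 MiB malloc bdev, a subsystem wrapping it, and a listener on 10.0.0.2:4420. The same sequence, with `rpc` stubbed to echo so it runs without a live nvmf_tgt; substitute `scripts/rpc.py` against the running target's socket to drive it for real:

```shell
# Recap of the RPC setup sequence from the log above. rpc is a stub that
# echoes the command line; every invocation below appears verbatim in the
# transcript.
rpc() { echo "rpc.py $*"; }

rpc bdev_set_options -p 5 -c 1
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Because the target was started with `--wait-for-rpc`, `framework_start_init` must come before the transport and bdev calls, matching the order in the log.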
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:10.425 { 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme$subsystem", 00:39:10.425 "trtype": "$TEST_TRANSPORT", 00:39:10.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.425 "adrfam": "ipv4", 00:39:10.425 "trsvcid": "$NVMF_PORT", 00:39:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.425 "hdgst": ${hdgst:-false}, 00:39:10.425 "ddgst": ${ddgst:-false} 00:39:10.425 }, 00:39:10.425 "method": "bdev_nvme_attach_controller" 00:39:10.425 } 00:39:10.425 EOF 00:39:10.425 )") 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2274475 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:10.425 08:36:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2274477 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:10.425 { 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme$subsystem", 00:39:10.425 "trtype": "$TEST_TRANSPORT", 00:39:10.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.425 "adrfam": "ipv4", 00:39:10.425 "trsvcid": "$NVMF_PORT", 00:39:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.425 "hdgst": ${hdgst:-false}, 00:39:10.425 "ddgst": ${ddgst:-false} 00:39:10.425 }, 00:39:10.425 "method": "bdev_nvme_attach_controller" 00:39:10.425 } 00:39:10.425 EOF 00:39:10.425 )") 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:10.425 { 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme$subsystem", 00:39:10.425 "trtype": "$TEST_TRANSPORT", 00:39:10.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.425 "adrfam": "ipv4", 00:39:10.425 "trsvcid": "$NVMF_PORT", 00:39:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.425 "hdgst": ${hdgst:-false}, 00:39:10.425 "ddgst": ${ddgst:-false} 00:39:10.425 }, 00:39:10.425 "method": "bdev_nvme_attach_controller" 00:39:10.425 } 00:39:10.425 EOF 00:39:10.425 )") 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:10.425 { 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme$subsystem", 00:39:10.425 "trtype": "$TEST_TRANSPORT", 00:39:10.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.425 "adrfam": "ipv4", 00:39:10.425 "trsvcid": "$NVMF_PORT", 00:39:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.425 "hdgst": ${hdgst:-false}, 00:39:10.425 "ddgst": ${ddgst:-false} 00:39:10.425 }, 00:39:10.425 "method": 
"bdev_nvme_attach_controller" 00:39:10.425 } 00:39:10.425 EOF 00:39:10.425 )") 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2274468 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme1", 00:39:10.425 "trtype": "tcp", 00:39:10.425 "traddr": "10.0.0.2", 00:39:10.425 "adrfam": "ipv4", 00:39:10.425 "trsvcid": "4420", 00:39:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.425 "hdgst": false, 00:39:10.425 "ddgst": false 00:39:10.425 }, 00:39:10.425 "method": "bdev_nvme_attach_controller" 00:39:10.425 }' 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:39:10.425 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:10.425 "params": { 00:39:10.425 "name": "Nvme1", 00:39:10.425 "trtype": "tcp", 00:39:10.425 "traddr": "10.0.0.2", 00:39:10.426 "adrfam": "ipv4", 00:39:10.426 "trsvcid": "4420", 00:39:10.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.426 "hdgst": false, 00:39:10.426 "ddgst": false 00:39:10.426 }, 00:39:10.426 "method": "bdev_nvme_attach_controller" 00:39:10.426 }' 00:39:10.426 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:39:10.426 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:10.426 "params": { 00:39:10.426 "name": "Nvme1", 00:39:10.426 "trtype": "tcp", 00:39:10.426 "traddr": "10.0.0.2", 00:39:10.426 "adrfam": "ipv4", 00:39:10.426 "trsvcid": "4420", 00:39:10.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.426 "hdgst": false, 00:39:10.426 "ddgst": false 00:39:10.426 }, 00:39:10.426 "method": "bdev_nvme_attach_controller" 00:39:10.426 }' 00:39:10.426 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:39:10.426 08:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:10.426 "params": { 00:39:10.426 "name": "Nvme1", 00:39:10.426 "trtype": "tcp", 00:39:10.426 "traddr": "10.0.0.2", 00:39:10.426 "adrfam": "ipv4", 00:39:10.426 "trsvcid": "4420", 00:39:10.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.426 "hdgst": false, 00:39:10.426 "ddgst": false 00:39:10.426 }, 00:39:10.426 "method": "bdev_nvme_attach_controller" 
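The repeated heredoc/`jq`/`printf` trace above is `gen_nvmf_target_json` from `nvmf/common.sh` building the `--json` config that each bdevperf instance reads from `/dev/fd/63`. A hedged reconstruction of that assembly step (not the exact SPDK helper; the variable values are stand-ins for the live test values, and the `jq .` normalization pass is omitted to keep the sketch dependency-free):

```shell
#!/usr/bin/env bash
# Build one bdev_nvme_attach_controller fragment per subsystem via a heredoc,
# then join the fragments with IFS=, as the traced helper does.
TEST_TRANSPORT=tcp            # example value; the real test exports this
NVMF_FIRST_TARGET_IP=10.0.0.2 # example value
NVMF_PORT=4420                # example value
config=()
for subsystem in 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Join the array elements with commas (subshell so IFS stays untouched).
json="$(IFS=,; printf '%s' "${config[*]}")"
printf '%s\n' "$json"
```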
00:39:10.426 }' 00:39:10.426 [2024-11-20 08:36:14.995514] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:39:10.426 [2024-11-20 08:36:14.995569] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:10.426 [2024-11-20 08:36:14.995696] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:39:10.426 [2024-11-20 08:36:14.995744] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:10.426 [2024-11-20 08:36:14.996747] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:39:10.426 [2024-11-20 08:36:14.996792] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:10.426 [2024-11-20 08:36:14.999508] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:39:10.426 [2024-11-20 08:36:14.999558] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:10.687 [2024-11-20 08:36:15.170948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.687 [2024-11-20 08:36:15.200563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:10.687 [2024-11-20 08:36:15.217480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.687 [2024-11-20 08:36:15.246694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:10.687 [2024-11-20 08:36:15.263243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.687 [2024-11-20 08:36:15.291440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:10.687 [2024-11-20 08:36:15.323843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.687 [2024-11-20 08:36:15.352663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:10.687 Running I/O for 1 seconds... 00:39:10.687 Running I/O for 1 seconds... 00:39:10.948 Running I/O for 1 seconds... 00:39:10.948 Running I/O for 1 seconds... 
00:39:11.892 12926.00 IOPS, 50.49 MiB/s 00:39:11.892 Latency(us) 00:39:11.892 [2024-11-20T07:36:16.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.892 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:11.892 Nvme1n1 : 1.01 12981.83 50.71 0.00 0.00 9828.29 2075.31 12233.39 00:39:11.892 [2024-11-20T07:36:16.621Z] =================================================================================================================== 00:39:11.892 [2024-11-20T07:36:16.621Z] Total : 12981.83 50.71 0.00 0.00 9828.29 2075.31 12233.39 00:39:11.892 13146.00 IOPS, 51.35 MiB/s 00:39:11.892 Latency(us) 00:39:11.892 [2024-11-20T07:36:16.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.892 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:11.892 Nvme1n1 : 1.01 13223.22 51.65 0.00 0.00 9652.84 4396.37 13926.40 00:39:11.892 [2024-11-20T07:36:16.621Z] =================================================================================================================== 00:39:11.892 [2024-11-20T07:36:16.621Z] Total : 13223.22 51.65 0.00 0.00 9652.84 4396.37 13926.40 00:39:11.892 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2274471 00:39:11.892 17045.00 IOPS, 66.58 MiB/s 00:39:11.892 Latency(us) 00:39:11.892 [2024-11-20T07:36:16.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.892 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:11.892 Nvme1n1 : 1.00 17102.31 66.81 0.00 0.00 7468.23 2525.87 11960.32 00:39:11.892 [2024-11-20T07:36:16.621Z] =================================================================================================================== 00:39:11.892 [2024-11-20T07:36:16.621Z] Total : 17102.31 66.81 0.00 0.00 7468.23 2525.87 11960.32 00:39:11.892 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 2274475 00:39:11.892 187520.00 IOPS, 732.50 MiB/s 00:39:11.892 Latency(us) 00:39:11.892 [2024-11-20T07:36:16.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.892 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:11.892 Nvme1n1 : 1.00 187151.82 731.06 0.00 0.00 680.40 303.79 1966.08 00:39:11.892 [2024-11-20T07:36:16.621Z] =================================================================================================================== 00:39:11.892 [2024-11-20T07:36:16.621Z] Total : 187151.82 731.06 0.00 0.00 680.40 303.79 1966.08 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2274477 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:12.153 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:12.154 08:36:16 
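The throughput columns in the latency tables above follow directly from the IOPS column: MiB/s is IOPS times the IO size (4096 bytes per the `-o 4096` flag) divided by 1 MiB. A quick arithmetic check against two of the reported rows (12926 IOPS for the write job, 187520 IOPS for the flush job):

```shell
#!/usr/bin/env bash
# MiB/s = IOPS * 4096 B / 1048576 B; bash arithmetic is integer-only, so use awk.
write_mibs=$(awk 'BEGIN { printf "%.2f", 12926 * 4096 / (1024 * 1024) }')
flush_mibs=$(awk 'BEGIN { printf "%.2f", 187520 * 4096 / (1024 * 1024) }')
echo "write: $write_mibs MiB/s"   # matches "12926.00 IOPS, 50.49 MiB/s"
echo "flush: $flush_mibs MiB/s"   # matches "187520.00 IOPS, 732.50 MiB/s"
```

The flush workload's far higher IOPS (and sub-microsecond-scale average latency of 680.40 us vs ~9.7 ms) is expected: flush on a Malloc-backed namespace completes without moving data.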
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:12.154 rmmod nvme_tcp 00:39:12.154 rmmod nvme_fabrics 00:39:12.154 rmmod nvme_keyring 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 2274229 ']' 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 2274229 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2274229 ']' 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2274229 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274229 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274229' 00:39:12.154 killing process with pid 2274229 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2274229 00:39:12.154 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2274229 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:12.416 08:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:14.330 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:14.330 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:39:14.330 00:39:14.330 real 0m14.038s 00:39:14.330 user 0m15.242s 00:39:14.330 sys 0m8.289s 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.330 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:14.330 ************************************ 00:39:14.330 END TEST nvmf_bdev_io_wait 00:39:14.330 ************************************ 00:39:14.596 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:14.596 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:14.596 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.596 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:14.596 ************************************ 00:39:14.596 START TEST nvmf_queue_depth 
00:39:14.596 ************************************ 00:39:14.596 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:14.596 * Looking for test storage... 00:39:14.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.597 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:14.597 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.597 --rc genhtml_branch_coverage=1 00:39:14.597 --rc genhtml_function_coverage=1 00:39:14.597 --rc genhtml_legend=1 00:39:14.597 --rc geninfo_all_blocks=1 00:39:14.597 --rc geninfo_unexecuted_blocks=1 00:39:14.597 00:39:14.597 ' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.597 --rc genhtml_branch_coverage=1 00:39:14.597 --rc genhtml_function_coverage=1 00:39:14.597 --rc genhtml_legend=1 00:39:14.597 --rc geninfo_all_blocks=1 00:39:14.597 --rc geninfo_unexecuted_blocks=1 00:39:14.597 00:39:14.597 ' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.597 --rc genhtml_branch_coverage=1 00:39:14.597 --rc genhtml_function_coverage=1 00:39:14.597 --rc genhtml_legend=1 00:39:14.597 --rc geninfo_all_blocks=1 00:39:14.597 --rc geninfo_unexecuted_blocks=1 00:39:14.597 
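The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2` → split on `IFS=.-:` → component-wise compare) is the lcov version gate. A self-contained re-implementation of the core comparison, assuming bash (this is a sketch of the traced logic, not the SPDK function itself):

```shell
#!/usr/bin/env bash
# Return 0 if dotted version $1 is strictly less than $2, comparing
# component-wise with missing components treated as 0 (so 1.15 < 2).
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # first differing component decides
        (( a > b )) && return 1
    done
    return 1                      # all components equal: not less-than
}
```

With this, `version_lt 1.15 2` succeeds (1 < 2 at the first component), which is why the trace proceeds to set the `--rc lcov_branch_coverage=1` options for the older lcov.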
00:39:14.597 ' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.597 --rc genhtml_branch_coverage=1 00:39:14.597 --rc genhtml_function_coverage=1 00:39:14.597 --rc genhtml_legend=1 00:39:14.597 --rc geninfo_all_blocks=1 00:39:14.597 --rc geninfo_unexecuted_blocks=1 00:39:14.597 00:39:14.597 ' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.597 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.597 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:39:14.597 08:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:39:14.597 08:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:39:24.604 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:24.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:24.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:24.604 Found net devices under 0000:31:00.0: cvl_0_0 00:39:24.604 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:24.605 Found net devices under 0000:31:00.1: cvl_0_1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.605 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:24.605 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:24.605 10.0.0.1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:24.605 10.0.0.2 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 
00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:24.605 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:24.605 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:24.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:24.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.511 ms 00:39:24.606 00:39:24.606 --- 10.0.0.1 ping statistics --- 00:39:24.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.606 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:24.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:24.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:39:24.606 00:39:24.606 --- 10.0.0.2 ping statistics --- 00:39:24.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.606 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:24.606 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 
00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:24.606 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.607 08:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:24.607 08:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=2279628 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 2279628 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2279628 ']' 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 [2024-11-20 08:36:28.080407] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:24.607 [2024-11-20 08:36:28.081707] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:39:24.607 [2024-11-20 08:36:28.081767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.607 [2024-11-20 08:36:28.195019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.607 [2024-11-20 08:36:28.244976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.607 [2024-11-20 08:36:28.245027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.607 [2024-11-20 08:36:28.245036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.607 [2024-11-20 08:36:28.245043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.607 [2024-11-20 08:36:28.245050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:24.607 [2024-11-20 08:36:28.245822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.607 [2024-11-20 08:36:28.320873] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:24.607 [2024-11-20 08:36:28.321156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 [2024-11-20 08:36:28.930679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 Malloc0 00:39:24.607 08:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.607 08:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 [2024-11-20 08:36:29.010904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.607 
08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2279681 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2279681 /var/tmp/bdevperf.sock 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2279681 ']' 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:24.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.607 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:24.607 [2024-11-20 08:36:29.067114] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:39:24.607 [2024-11-20 08:36:29.067172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279681 ] 00:39:24.607 [2024-11-20 08:36:29.150626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.607 [2024-11-20 08:36:29.187021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.179 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.179 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:25.179 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:25.180 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.180 08:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:25.441 NVMe0n1 00:39:25.441 08:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.441 08:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:25.701 Running I/O for 10 seconds... 
00:39:27.628 8773.00 IOPS, 34.27 MiB/s [2024-11-20T07:36:33.298Z] 8996.00 IOPS, 35.14 MiB/s [2024-11-20T07:36:34.682Z] 9276.33 IOPS, 36.24 MiB/s [2024-11-20T07:36:35.623Z] 9988.25 IOPS, 39.02 MiB/s [2024-11-20T07:36:36.563Z] 10429.60 IOPS, 40.74 MiB/s [2024-11-20T07:36:37.504Z] 10703.33 IOPS, 41.81 MiB/s [2024-11-20T07:36:38.444Z] 10855.43 IOPS, 42.40 MiB/s [2024-11-20T07:36:39.384Z] 11008.75 IOPS, 43.00 MiB/s [2024-11-20T07:36:40.325Z] 11124.44 IOPS, 43.45 MiB/s [2024-11-20T07:36:40.325Z] 11223.30 IOPS, 43.84 MiB/s 00:39:35.596 Latency(us) 00:39:35.596 [2024-11-20T07:36:40.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:35.596 Verification LBA range: start 0x0 length 0x4000 00:39:35.596 NVMe0n1 : 10.05 11249.93 43.95 0.00 0.00 90657.46 15619.41 75584.85 00:39:35.596 [2024-11-20T07:36:40.325Z] =================================================================================================================== 00:39:35.596 [2024-11-20T07:36:40.325Z] Total : 11249.93 43.95 0.00 0.00 90657.46 15619.41 75584.85 00:39:35.596 { 00:39:35.596 "results": [ 00:39:35.596 { 00:39:35.596 "job": "NVMe0n1", 00:39:35.596 "core_mask": "0x1", 00:39:35.596 "workload": "verify", 00:39:35.596 "status": "finished", 00:39:35.596 "verify_range": { 00:39:35.596 "start": 0, 00:39:35.596 "length": 16384 00:39:35.596 }, 00:39:35.596 "queue_depth": 1024, 00:39:35.596 "io_size": 4096, 00:39:35.596 "runtime": 10.053037, 00:39:35.596 "iops": 11249.93372649479, 00:39:35.596 "mibps": 43.94505361912027, 00:39:35.596 "io_failed": 0, 00:39:35.596 "io_timeout": 0, 00:39:35.596 "avg_latency_us": 90657.45824809601, 00:39:35.596 "min_latency_us": 15619.413333333334, 00:39:35.596 "max_latency_us": 75584.85333333333 00:39:35.596 } 00:39:35.596 ], 00:39:35.596 "core_count": 1 00:39:35.596 } 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2279681 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2279681 ']' 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2279681 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279681 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279681' 00:39:35.858 killing process with pid 2279681 00:39:35.858 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2279681 00:39:35.858 Received shutdown signal, test time was about 10.000000 seconds 00:39:35.858 00:39:35.858 Latency(us) 00:39:35.858 [2024-11-20T07:36:40.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.858 [2024-11-20T07:36:40.587Z] =================================================================================================================== 00:39:35.858 [2024-11-20T07:36:40.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2279681 00:39:35.859 08:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:35.859 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:35.859 rmmod nvme_tcp 00:39:35.859 rmmod nvme_fabrics 00:39:35.859 rmmod nvme_keyring 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 2279628 ']' 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 2279628 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2279628 ']' 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2279628 00:39:36.120 08:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279628 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279628' 00:39:36.120 killing process with pid 2279628 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2279628 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2279628 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:36.120 08:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:36.120 08:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:38.670 08:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:39:38.670 00:39:38.670 real 0m23.749s 00:39:38.670 user 0m25.285s 00:39:38.670 sys 0m8.217s 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:38.670 ************************************ 00:39:38.670 END TEST nvmf_queue_depth 00:39:38.670 ************************************ 00:39:38.670 08:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:38.670 ************************************ 00:39:38.670 START TEST nvmf_nmic 00:39:38.670 ************************************ 00:39:38.670 08:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:38.670 * Looking for test storage... 00:39:38.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.670 --rc genhtml_branch_coverage=1 00:39:38.670 --rc 
genhtml_function_coverage=1 00:39:38.670 --rc genhtml_legend=1 00:39:38.670 --rc geninfo_all_blocks=1 00:39:38.670 --rc geninfo_unexecuted_blocks=1 00:39:38.670 00:39:38.670 ' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.670 --rc genhtml_branch_coverage=1 00:39:38.670 --rc genhtml_function_coverage=1 00:39:38.670 --rc genhtml_legend=1 00:39:38.670 --rc geninfo_all_blocks=1 00:39:38.670 --rc geninfo_unexecuted_blocks=1 00:39:38.670 00:39:38.670 ' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.670 --rc genhtml_branch_coverage=1 00:39:38.670 --rc genhtml_function_coverage=1 00:39:38.670 --rc genhtml_legend=1 00:39:38.670 --rc geninfo_all_blocks=1 00:39:38.670 --rc geninfo_unexecuted_blocks=1 00:39:38.670 00:39:38.670 ' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.670 --rc genhtml_branch_coverage=1 00:39:38.670 --rc genhtml_function_coverage=1 00:39:38.670 --rc genhtml_legend=1 00:39:38.670 --rc geninfo_all_blocks=1 00:39:38.670 --rc geninfo_unexecuted_blocks=1 00:39:38.670 00:39:38.670 ' 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.670 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:38.671 08:36:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:39:38.671 08:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:46.826 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:46.826 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:39:46.826 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:46.826 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:46.826 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A 
pci_drivers 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:46.827 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:46.827 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:46.827 
08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:46.827 Found net devices under 0000:31:00.0: cvl_0_0 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:46.827 Found net devices under 0000:31:00.1: cvl_0_1 00:39:46.827 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.827 
08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:46.827 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:46.827 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:46.828 10.0.0.1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:46.828 10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:46.828 
08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:46.828 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:46.828 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:46.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:46.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.489 ms 00:39:46.828 00:39:46.828 --- 10.0.0.1 ping statistics --- 00:39:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.828 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n 
cvl_0_1 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:46.828 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:46.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:46.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:39:46.829 00:39:46.829 --- 10.0.0.2 ping statistics --- 00:39:46.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.829 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:46.829 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:39:46.829 08:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:46.829 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=2286630 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 2286630 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2286630 ']' 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:47.091 08:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.091 [2024-11-20 08:36:51.656799] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:47.091 [2024-11-20 08:36:51.657959] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:39:47.091 [2024-11-20 08:36:51.658015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:47.091 [2024-11-20 08:36:51.749473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:47.091 [2024-11-20 08:36:51.793140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:47.091 [2024-11-20 08:36:51.793176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:47.091 [2024-11-20 08:36:51.793184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:47.091 [2024-11-20 08:36:51.793193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:47.091 [2024-11-20 08:36:51.793199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:47.091 [2024-11-20 08:36:51.794775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.091 [2024-11-20 08:36:51.794910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:47.091 [2024-11-20 08:36:51.795263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:47.091 [2024-11-20 08:36:51.795265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.353 [2024-11-20 08:36:51.851270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:47.353 [2024-11-20 08:36:51.851322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:47.353 [2024-11-20 08:36:51.852275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:47.353 [2024-11-20 08:36:51.853086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:47.353 [2024-11-20 08:36:51.853178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.925 [2024-11-20 08:36:52.500176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.925 Malloc0 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.925 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 [2024-11-20 08:36:52.576034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.926 08:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:47.926 test case1: single bdev can't be used in multiple subsystems 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 [2024-11-20 08:36:52.611759] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:47.926 [2024-11-20 08:36:52.611777] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:47.926 [2024-11-20 08:36:52.611785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.926 request: 00:39:47.926 { 00:39:47.926 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:47.926 "namespace": { 00:39:47.926 "bdev_name": "Malloc0", 00:39:47.926 "no_auto_visible": false 00:39:47.926 }, 00:39:47.926 "method": "nvmf_subsystem_add_ns", 00:39:47.926 "req_id": 1 00:39:47.926 } 00:39:47.926 Got JSON-RPC error response 00:39:47.926 response: 00:39:47.926 { 00:39:47.926 "code": -32602, 00:39:47.926 "message": "Invalid parameters" 00:39:47.926 } 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:47.926 Adding namespace failed - expected result. 
00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:47.926 test case2: host connect to nvmf target in multiple paths 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.926 [2024-11-20 08:36:52.623868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.926 08:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:48.498 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:49.070 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:49.070 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:49.070 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:49.070 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:49.070 08:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:50.983 08:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:50.983 [global] 00:39:50.983 thread=1 00:39:50.983 invalidate=1 00:39:50.983 rw=write 00:39:50.983 time_based=1 00:39:50.983 runtime=1 00:39:50.983 ioengine=libaio 00:39:50.983 direct=1 00:39:50.983 bs=4096 00:39:50.983 iodepth=1 00:39:50.983 norandommap=0 00:39:50.983 numjobs=1 00:39:50.983 00:39:50.983 verify_dump=1 00:39:50.983 verify_backlog=512 00:39:50.983 verify_state_save=0 00:39:50.983 do_verify=1 00:39:50.983 verify=crc32c-intel 00:39:50.983 [job0] 00:39:50.983 filename=/dev/nvme0n1 00:39:50.983 Could not set queue depth (nvme0n1) 00:39:51.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:51.244 fio-3.35 00:39:51.244 Starting 1 thread 00:39:52.629 00:39:52.629 job0: (groupid=0, jobs=1): err= 0: pid=2287586: Wed Nov 20 
08:36:57 2024 00:39:52.629 read: IOPS=20, BW=80.8KiB/s (82.7kB/s)(84.0KiB/1040msec) 00:39:52.629 slat (nsec): min=10237, max=28590, avg=25792.38, stdev=3602.38 00:39:52.629 clat (usec): min=824, max=43021, avg=39470.95, stdev=8873.89 00:39:52.629 lat (usec): min=835, max=43048, avg=39496.74, stdev=8877.45 00:39:52.629 clat percentiles (usec): 00:39:52.629 | 1.00th=[ 824], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:52.629 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:52.629 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:39:52.629 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:39:52.629 | 99.99th=[43254] 00:39:52.629 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:39:52.629 slat (nsec): min=8856, max=62361, avg=28688.91, stdev=10875.84 00:39:52.629 clat (usec): min=156, max=639, avg=376.03, stdev=95.49 00:39:52.629 lat (usec): min=167, max=673, avg=404.72, stdev=99.10 00:39:52.629 clat percentiles (usec): 00:39:52.629 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 255], 20.00th=[ 297], 00:39:52.629 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 379], 60.00th=[ 408], 00:39:52.629 | 70.00th=[ 420], 80.00th=[ 465], 90.00th=[ 502], 95.00th=[ 537], 00:39:52.629 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 644], 00:39:52.629 | 99.99th=[ 644] 00:39:52.629 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:52.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:52.629 lat (usec) : 250=9.01%, 500=75.42%, 750=11.63%, 1000=0.19% 00:39:52.629 lat (msec) : 50=3.75% 00:39:52.629 cpu : usr=0.77%, sys=2.02%, ctx=533, majf=0, minf=1 00:39:52.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:52.629 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:52.629 00:39:52.629 Run status group 0 (all jobs): 00:39:52.629 READ: bw=80.8KiB/s (82.7kB/s), 80.8KiB/s-80.8KiB/s (82.7kB/s-82.7kB/s), io=84.0KiB (86.0kB), run=1040-1040msec 00:39:52.629 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:39:52.629 00:39:52.629 Disk stats (read/write): 00:39:52.629 nvme0n1: ios=67/512, merge=0/0, ticks=716/137, in_queue=853, util=93.39% 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:52.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:52.629 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:52.629 rmmod nvme_tcp 00:39:52.629 rmmod nvme_fabrics 00:39:52.629 rmmod nvme_keyring 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 2286630 ']' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2286630 ']' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286630' 00:39:52.890 killing process with pid 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2286630 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:52.890 08:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:39:55.438 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:55.438 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:39:55.438 00:39:55.438 real 0m16.720s 00:39:55.438 user 0m39.556s 00:39:55.438 sys 0m8.149s 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:55.438 ************************************ 00:39:55.438 END TEST nvmf_nmic 00:39:55.438 ************************************ 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:55.438 ************************************ 00:39:55.438 START TEST nvmf_fio_target 00:39:55.438 ************************************ 00:39:55.438 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:55.438 * Looking for test storage... 00:39:55.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:55.438 
08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:55.438 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.438 --rc genhtml_branch_coverage=1 00:39:55.438 --rc genhtml_function_coverage=1 00:39:55.438 --rc genhtml_legend=1 00:39:55.438 --rc geninfo_all_blocks=1 00:39:55.438 --rc geninfo_unexecuted_blocks=1 00:39:55.438 00:39:55.438 ' 00:39:55.438 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.438 --rc genhtml_branch_coverage=1 00:39:55.439 --rc genhtml_function_coverage=1 00:39:55.439 --rc genhtml_legend=1 00:39:55.439 --rc geninfo_all_blocks=1 00:39:55.439 --rc geninfo_unexecuted_blocks=1 00:39:55.439 00:39:55.439 ' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:55.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.439 --rc genhtml_branch_coverage=1 00:39:55.439 --rc genhtml_function_coverage=1 00:39:55.439 --rc genhtml_legend=1 00:39:55.439 --rc geninfo_all_blocks=1 00:39:55.439 --rc geninfo_unexecuted_blocks=1 00:39:55.439 00:39:55.439 ' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:39:55.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.439 --rc genhtml_branch_coverage=1 00:39:55.439 --rc genhtml_function_coverage=1 00:39:55.439 --rc genhtml_legend=1 00:39:55.439 --rc geninfo_all_blocks=1 00:39:55.439 --rc geninfo_unexecuted_blocks=1 00:39:55.439 00:39:55.439 ' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:55.439 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.439 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:55.439 08:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:39:55.439 08:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:40:03.589 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:03.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.589 
08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:03.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:03.589 Found net devices under 0000:31:00.0: cvl_0_0 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:03.589 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:03.589 Found net devices under 0000:31:00.1: cvl_0_1 00:40:03.589 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:03.590 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:03.590 10.0.0.1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:03.590 10.0.0.2 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:03.590 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:03.591 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:03.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
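The `val_to_ip` calls traced above expand a 32-bit value from the ip_pool into a dotted-quad address (167772161 becomes 10.0.0.1) before `ip addr add` assigns it. A minimal standalone sketch of that conversion, mirroring the `printf '%u.%u.%u.%u'` seen in the trace (the function name comes from the trace; the bit-shift decomposition of the four octets is an assumption about how setup.sh derives them):

```shell
# val_to_ip (sketch): print a 32-bit integer as a dotted-quad IPv4 address.
# 167772161 == 0x0A000001 -> 10.0.0.1, as in the trace above.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 255)) \
        $(((val >> 8) & 255)) $((val & 255))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```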
00:40:03.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.587 ms 00:40:03.591 00:40:03.591 --- 10.0.0.1 ping statistics --- 00:40:03.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:03.591 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:03.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
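The pair-setup loop traced above takes two consecutive addresses from the pool for each interface pair (`ips=("$ip" $((++ip)))`, then `ip_pool += 2`): the initiator gets the even offset, the target the odd one. A sketch of that arithmetic, under the assumption that later pairs simply continue in steps of two from the 0x0a000001 base (the helper name here is hypothetical):

```shell
# pair_ips (hypothetical helper): compute the initiator/target address
# pair for interface pair N from the ip_pool base 0x0a000001, the way
# setup_interfaces advances the pool two addresses per pair.
pair_ips() {
    local id=$1 base=$((0x0a000001)) ip octets
    for ip in $((base + id * 2)) $((base + id * 2 + 1)); do
        octets+=$(printf '%u.%u.%u.%u ' \
            $((ip >> 24)) $(((ip >> 16) & 255)) \
            $(((ip >> 8) & 255)) $((ip & 255)))
    done
    echo "${octets% }"
}

pair_ips 0   # 10.0.0.1 10.0.0.2  (matches the trace above)
pair_ips 1   # 10.0.0.3 10.0.0.4
```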
00:40:03.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:40:03.591 00:40:03.591 --- 10.0.0.2 ping statistics --- 00:40:03.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:03.591 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:03.591 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # 
get_net_dev initiator1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:40:03.591 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:40:03.592 08:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe 
nvme-tcp 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=2292472 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 2292472 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2292472 ']' 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
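The `waitforlisten` step echoed above blocks until `nvmf_tgt` is up and listening on its UNIX-domain RPC socket before any `rpc.py` call is issued. A simplified poll-loop sketch of that pattern (it only waits for the socket file to appear rather than for an accepting listener, and the retry count and sleep interval are illustrative, not taken from autotest_common.sh):

```shell
# wait_for_socket (sketch): poll until a UNIX-domain socket file such as
# /var/tmp/spdk.sock exists, or give up after a bounded number of retries.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage would mirror the trace: `wait_for_socket /var/tmp/spdk.sock || exit 1` before the first `rpc.py nvmf_create_transport` call.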
00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:03.592 08:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:03.592 [2024-11-20 08:37:08.024068] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:03.592 [2024-11-20 08:37:08.025218] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:40:03.592 [2024-11-20 08:37:08.025271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:03.592 [2024-11-20 08:37:08.115781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:03.592 [2024-11-20 08:37:08.157428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:03.592 [2024-11-20 08:37:08.157465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:03.592 [2024-11-20 08:37:08.157473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:03.592 [2024-11-20 08:37:08.157480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:03.592 [2024-11-20 08:37:08.157486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:03.592 [2024-11-20 08:37:08.159348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:03.592 [2024-11-20 08:37:08.159469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:40:03.592 [2024-11-20 08:37:08.159624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:03.592 [2024-11-20 08:37:08.159625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:40:03.592 [2024-11-20 08:37:08.216027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:40:03.592 [2024-11-20 08:37:08.216074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:40:03.592 [2024-11-20 08:37:08.217029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:40:03.592 [2024-11-20 08:37:08.217753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:40:03.592 [2024-11-20 08:37:08.217828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
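With the target up, the trace below drives it over JSON-RPC. A minimal sketch of that bring-up flow (NQN, serial, address, and bdev sizes are taken from the log; the `rpc.py` path is installation-specific, and `DRY_RUN` is a device of this sketch, not part of the test script):

```shell
#!/usr/bin/env bash
# Sketch of the RPC bring-up sequence target/fio.sh performs below.
RPC=${RPC:-rpc.py}        # e.g. $SPDK_DIR/scripts/rpc.py -- adjust to your checkout
DRY_RUN=${DRY_RUN:-1}     # default: only print the commands instead of running them

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" bdev_malloc_create 64 512    # repeated for Malloc0..Malloc6 in the log
run "$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
run "$RPC" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The host side then attaches with `nvme connect` and waits for the namespaces to surface, which is what the `waitforserial` loop in the trace does.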
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:04.165 08:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:40:04.426 [2024-11-20 08:37:09.016221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:04.426 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:04.688 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:40:04.688 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:04.950 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:40:04.950 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:04.950 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:40:04.950 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:05.213 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:40:05.213 08:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:40:05.476 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:05.476 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:40:05.476 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:05.738 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:40:05.738 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:40:06.000 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:40:06.000 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:40:06.000 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:40:06.261 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:40:06.261 08:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:40:06.522 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:40:06.522 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:40:06.522 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:06.783 [2024-11-20 08:37:11.360330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:06.783 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:40:07.045 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:40:07.045 08:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:40:07.616 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:40:07.616 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:40:07.616 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:40:07.617 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:40:07.617 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:40:07.617 08:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:40:09.650 08:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:40:09.650 [global]
00:40:09.650 thread=1
00:40:09.650 invalidate=1
00:40:09.650 rw=write
00:40:09.650 time_based=1
00:40:09.650 runtime=1
00:40:09.650 ioengine=libaio
00:40:09.650 direct=1
00:40:09.650 bs=4096
00:40:09.650 iodepth=1
00:40:09.650 norandommap=0
00:40:09.650 numjobs=1
00:40:09.650 
00:40:09.650 verify_dump=1
00:40:09.650 verify_backlog=512
00:40:09.650 verify_state_save=0
00:40:09.650 do_verify=1
00:40:09.650 verify=crc32c-intel
00:40:09.650 [job0]
00:40:09.650 filename=/dev/nvme0n1
00:40:09.650 [job1]
00:40:09.650 filename=/dev/nvme0n2
00:40:09.650 [job2]
00:40:09.650 filename=/dev/nvme0n3
00:40:09.650 [job3]
00:40:09.650 filename=/dev/nvme0n4
00:40:09.650 Could not set queue depth (nvme0n1)
00:40:09.650 Could not set queue depth (nvme0n2)
00:40:09.650 Could not set queue depth (nvme0n3)
00:40:09.650 Could not set queue depth (nvme0n4)
00:40:09.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:09.939 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:09.939 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:09.939 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:09.939 fio-3.35
00:40:09.939 Starting 4 threads
00:40:11.324 
00:40:11.324 job0: (groupid=0, jobs=1): err= 0: pid=2293882: Wed Nov 20 08:37:15 2024
00:40:11.324 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec)
00:40:11.324 slat (nsec): min=26206, max=27104, avg=26616.50, stdev=217.51
00:40:11.324 clat (usec): min=1169, max=42025, avg=39640.58, stdev=9604.23
00:40:11.324 lat (usec): min=1196, max=42052, avg=39667.19, stdev=9604.20
00:40:11.324 clat percentiles (usec):
00:40:11.324 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681],
00:40:11.324 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:40:11.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:40:11.324 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:40:11.324 | 99.99th=[42206]
00:40:11.324 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets
00:40:11.324 slat (nsec): min=9301, max=70459, avg=30792.49, stdev=9422.61
00:40:11.324 clat (usec): min=261, max=873, avg=592.13, stdev=125.23
00:40:11.324 lat (usec): min=271, max=907, avg=622.92, stdev=128.49
00:40:11.324 clat percentiles (usec):
00:40:11.324 | 1.00th=[ 334], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 490],
00:40:11.324 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627],
00:40:11.324 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791],
00:40:11.324 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 873], 99.95th=[ 873],
00:40:11.324 | 99.99th=[ 873]
00:40:11.324 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1
00:40:11.324 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:11.324 lat (usec) : 500=23.02%, 750=63.02%, 1000=10.57%
00:40:11.324 lat (msec) : 2=0.19%, 50=3.21%
00:40:11.324 cpu : usr=1.25%, sys=1.74%, ctx=530, majf=0, minf=1
00:40:11.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.324 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.324 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:11.324 job1: (groupid=0, jobs=1): err= 0: pid=2293884: Wed Nov 20 08:37:15 2024
00:40:11.324 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec)
00:40:11.324 slat (nsec): min=25647, max=27790, avg=26160.83, stdev=459.45
00:40:11.324 clat (usec): min=933, max=42020, avg=39464.78, stdev=9624.75
00:40:11.324 lat (usec): min=958, max=42046, avg=39490.94, stdev=9624.84
00:40:11.324 clat percentiles (usec):
00:40:11.324 | 1.00th=[ 930], 5.00th=[ 930], 10.00th=[41157], 20.00th=[41157],
00:40:11.324 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:40:11.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:40:11.324 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:40:11.324 | 99.99th=[42206]
00:40:11.325 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets
00:40:11.325 slat (nsec): min=9888, max=60858, avg=30612.12, stdev=10732.57
00:40:11.325 clat (usec): min=165, max=927, avg=548.03, stdev=149.70
00:40:11.325 lat (usec): min=176, max=968, avg=578.65, stdev=152.18
00:40:11.325 clat percentiles (usec):
00:40:11.325 | 1.00th=[ 258], 5.00th=[ 297], 10.00th=[ 363], 20.00th=[ 408],
00:40:11.325 | 30.00th=[ 461], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 594],
00:40:11.325 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 742], 95.00th=[ 816],
00:40:11.325 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 930], 99.95th=[ 930],
00:40:11.325 | 99.99th=[ 930]
00:40:11.325 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1
00:40:11.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:11.325 lat (usec) : 250=0.94%, 500=35.47%, 750=50.75%, 1000=9.62%
00:40:11.325 lat (msec) : 50=3.21%
00:40:11.325 cpu : usr=0.20%, sys=1.98%, ctx=531, majf=0, minf=1
00:40:11.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.325 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:11.325 job2: (groupid=0, jobs=1): err= 0: pid=2293886: Wed Nov 20 08:37:15 2024
00:40:11.325 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec)
00:40:11.325 slat (nsec): min=27683, max=28911, avg=28160.67, stdev=292.57
00:40:11.325 clat (usec): min=1012, max=42087, avg=39513.12, stdev=9616.01
00:40:11.325 lat (usec): min=1040, max=42115, avg=39541.28, stdev=9616.06
00:40:11.325 clat percentiles (usec):
00:40:11.325 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157],
00:40:11.325 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:40:11.325 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:40:11.325 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:40:11.325 | 99.99th=[42206]
00:40:11.325 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets
00:40:11.325 slat (nsec): min=9964, max=62797, avg=32095.95, stdev=10221.06
00:40:11.325 clat (usec): min=203, max=981, avg=600.75, stdev=133.79
00:40:11.325 lat (usec): min=214, max=1021, avg=632.84, stdev=137.01
00:40:11.325 clat percentiles (usec):
00:40:11.325 | 1.00th=[ 293], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 494],
00:40:11.325 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635],
00:40:11.325 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816],
00:40:11.325 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979],
00:40:11.325 | 99.99th=[ 979]
00:40:11.325 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1
00:40:11.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:11.325 lat (usec) : 250=0.38%, 500=20.19%, 750=63.02%, 1000=13.02%
00:40:11.325 lat (msec) : 2=0.19%, 50=3.21%
00:40:11.325 cpu : usr=0.67%, sys=2.31%, ctx=532, majf=0, minf=1
00:40:11.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.325 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:11.325 job3: (groupid=0, jobs=1): err= 0: pid=2293887: Wed Nov 20 08:37:15 2024
00:40:11.325 read: IOPS=18, BW=74.7KiB/s (76.4kB/s)(76.0KiB/1018msec)
00:40:11.325 slat (nsec): min=25224, max=42939, avg=26505.47, stdev=3993.92
00:40:11.325 clat (usec): min=27972, max=41753, avg=40351.18, stdev=3006.15
00:40:11.325 lat (usec): min=27998, max=41779, avg=40377.69, stdev=3006.58
00:40:11.325 clat percentiles (usec):
00:40:11.325 | 1.00th=[27919], 5.00th=[27919], 10.00th=[40633], 20.00th=[41157],
00:40:11.325 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:40:11.325 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681],
00:40:11.325 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:40:11.325 | 99.99th=[41681]
00:40:11.325 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:40:11.325 slat (nsec): min=9407, max=69098, avg=28086.64, stdev=9781.34
00:40:11.325 clat (usec): min=155, max=2815, avg=455.73, stdev=175.68
00:40:11.325 lat (usec): min=168, max=2848, avg=483.81, stdev=178.46
00:40:11.325 clat percentiles (usec):
00:40:11.325 | 1.00th=[ 190], 5.00th=[ 235], 10.00th=[ 285], 20.00th=[ 326],
00:40:11.325 | 30.00th=[ 355], 40.00th=[ 392], 50.00th=[ 441], 60.00th=[ 482],
00:40:11.325 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 693],
00:40:11.325 | 99.00th=[ 799], 99.50th=[ 873], 99.90th=[ 2802], 99.95th=[ 2802],
00:40:11.325 | 99.99th=[ 2802]
00:40:11.325 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1
00:40:11.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:11.325 lat (usec) : 250=6.21%, 500=55.74%, 750=31.83%, 1000=2.45%
00:40:11.325 lat (msec) : 4=0.19%, 50=3.58%
00:40:11.325 cpu : usr=0.98%, sys=1.08%, ctx=531, majf=0, minf=2
00:40:11.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.325 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.325 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:11.325 
00:40:11.325 Run status group 0 (all jobs):
00:40:11.325 READ: bw=281KiB/s (288kB/s), 69.2KiB/s-74.7KiB/s (70.9kB/s-76.4kB/s), io=292KiB (299kB), run=1011-1040msec
00:40:11.325 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2026KiB/s (2016kB/s-2074kB/s), io=8192KiB (8389kB), run=1011-1040msec
00:40:11.325 
00:40:11.325 Disk stats (read/write):
00:40:11.325 nvme0n1: ios=63/512, merge=0/0, ticks=566/241, in_queue=807, util=87.58%
00:40:11.325 nvme0n2: ios=58/512, merge=0/0, ticks=891/253, in_queue=1144, util=97.96%
00:40:11.325 nvme0n3: ios=61/512, merge=0/0, ticks=857/255, in_queue=1112, util=97.78%
00:40:11.325 nvme0n4: ios=31/512, merge=0/0, ticks=1057/220, in_queue=1277, util=91.32%
00:40:11.325 08:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:40:11.325 [global]
00:40:11.325 thread=1
00:40:11.325 invalidate=1
00:40:11.325 rw=randwrite
00:40:11.325 time_based=1
00:40:11.325 runtime=1
00:40:11.325 ioengine=libaio
00:40:11.325 direct=1
00:40:11.325 bs=4096
00:40:11.325 iodepth=1
00:40:11.325 norandommap=0
00:40:11.325 numjobs=1
00:40:11.325 
00:40:11.325 verify_dump=1
00:40:11.325 verify_backlog=512
00:40:11.325 verify_state_save=0
00:40:11.325 do_verify=1
00:40:11.325 verify=crc32c-intel
00:40:11.325 [job0]
00:40:11.325 filename=/dev/nvme0n1
00:40:11.325 [job1]
00:40:11.325 filename=/dev/nvme0n2
00:40:11.325 [job2]
00:40:11.325 filename=/dev/nvme0n3
00:40:11.325 [job3]
00:40:11.325 filename=/dev/nvme0n4
00:40:11.325 Could not set queue depth (nvme0n1)
00:40:11.325 Could not set queue depth (nvme0n2)
00:40:11.325 Could not set queue depth (nvme0n3)
00:40:11.325 Could not set queue depth (nvme0n4)
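Each pass invokes fio through SPDK's `fio-wrapper` (`-p` protocol, `-i` block size, `-d` iodepth, `-t` rw mode, `-r` runtime, `-v` verify, as seen in the trace). A sketch of the equivalent raw fio command line for the job file shown above; the `fio_cmd` helper is hypothetical, and the device path assumes the `nvme connect` from earlier:

```shell
#!/usr/bin/env bash
# Build (print) a fio command equivalent to the generated job file above.
fio_cmd() {
  rw=$1 depth=$2
  echo fio --thread --invalidate=1 --rw="$rw" --time_based --runtime=1 \
    --ioengine=libaio --direct=1 --bs=4096 --iodepth="$depth" --numjobs=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512 \
    --verify_state_save=0 --name=job0 --filename=/dev/nvme0n1
}

fio_cmd randwrite 1   # second pass in the trace: random writes at queue depth 1
```

The "Could not set queue depth" warnings come from fio failing to adjust the block device's nr_requests; they are harmless here since iodepth=1 jobs never queue more than one I/O.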
00:40:11.585 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:11.585 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:11.585 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:11.585 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:40:11.585 fio-3.35
00:40:11.585 Starting 4 threads
00:40:12.970 
00:40:12.970 job0: (groupid=0, jobs=1): err= 0: pid=2294413: Wed Nov 20 08:37:17 2024
00:40:12.970 read: IOPS=29, BW=120KiB/s (123kB/s)(120KiB/1002msec)
00:40:12.970 slat (nsec): min=7108, max=29196, avg=20615.07, stdev=8498.91
00:40:12.970 clat (usec): min=589, max=42123, avg=22167.36, stdev=20460.97
00:40:12.970 lat (usec): min=598, max=42149, avg=22187.97, stdev=20467.25
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 586], 5.00th=[ 611], 10.00th=[ 627], 20.00th=[ 709],
00:40:12.970 | 30.00th=[ 881], 40.00th=[ 1045], 50.00th=[27657], 60.00th=[41157],
00:40:12.970 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:40:12.970 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:40:12.970 | 99.99th=[42206]
00:40:12.970 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:40:12.970 slat (nsec): min=9880, max=67990, avg=31235.51, stdev=8087.89
00:40:12.970 clat (usec): min=148, max=949, avg=609.56, stdev=145.08
00:40:12.970 lat (usec): min=181, max=983, avg=640.79, stdev=147.08
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 277], 5.00th=[ 355], 10.00th=[ 412], 20.00th=[ 482],
00:40:12.970 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 627], 60.00th=[ 660],
00:40:12.970 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840],
00:40:12.970 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947],
00:40:12.970 | 99.99th=[ 947]
00:40:12.970 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:40:12.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:12.970 lat (usec) : 250=0.18%, 500=20.85%, 750=59.04%, 1000=16.42%
00:40:12.970 lat (msec) : 2=0.55%, 50=2.95%
00:40:12.970 cpu : usr=0.40%, sys=2.10%, ctx=545, majf=0, minf=1
00:40:12.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:12.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:12.970 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:12.970 job1: (groupid=0, jobs=1): err= 0: pid=2294414: Wed Nov 20 08:37:17 2024
00:40:12.970 read: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec)
00:40:12.970 slat (nsec): min=6445, max=56948, avg=24640.83, stdev=6730.43
00:40:12.970 clat (usec): min=314, max=1038, avg=754.14, stdev=126.23
00:40:12.970 lat (usec): min=340, max=1064, avg=778.78, stdev=128.33
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 392], 5.00th=[ 519], 10.00th=[ 594], 20.00th=[ 652],
00:40:12.970 | 30.00th=[ 709], 40.00th=[ 742], 50.00th=[ 775], 60.00th=[ 799],
00:40:12.970 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 930],
00:40:12.970 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1037],
00:40:12.970 | 99.99th=[ 1037]
00:40:12.970 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:40:12.970 slat (nsec): min=8753, max=90877, avg=29479.02, stdev=8946.67
00:40:12.970 clat (usec): min=114, max=737, avg=414.07, stdev=106.43
00:40:12.970 lat (usec): min=124, max=768, avg=443.55, stdev=108.58
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 204], 5.00th=[ 241], 10.00th=[ 289], 20.00th=[ 318],
00:40:12.970 | 30.00th=[ 343], 40.00th=[ 371], 50.00th=[ 412], 60.00th=[ 445],
00:40:12.970 | 70.00th=[ 474], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 594],
00:40:12.970 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 725], 99.95th=[ 734],
00:40:12.970 | 99.99th=[ 734]
00:40:12.970 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:40:12.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:12.970 lat (usec) : 250=3.51%, 500=44.85%, 750=28.63%, 1000=22.83%
00:40:12.970 lat (msec) : 2=0.18%
00:40:12.970 cpu : usr=3.10%, sys=6.80%, ctx=1709, majf=0, minf=2
00:40:12.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:12.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 issued rwts: total=684,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:12.970 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:12.970 job2: (groupid=0, jobs=1): err= 0: pid=2294415: Wed Nov 20 08:37:17 2024
00:40:12.970 read: IOPS=188, BW=755KiB/s (773kB/s)(756KiB/1001msec)
00:40:12.970 slat (nsec): min=2207, max=31250, avg=21002.07, stdev=8044.87
00:40:12.970 clat (usec): min=558, max=42186, avg=3345.30, stdev=9586.84
00:40:12.970 lat (usec): min=563, max=42212, avg=3366.30, stdev=9587.76
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 652], 5.00th=[ 725], 10.00th=[ 824], 20.00th=[ 906],
00:40:12.970 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020],
00:40:12.970 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[41157],
00:40:12.970 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:40:12.970 | 99.99th=[42206]
00:40:12.970 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:40:12.970 slat (nsec): min=9826, max=52267, avg=30493.85, stdev=8010.09
00:40:12.970 clat (usec): min=202, max=1065, avg=670.93, stdev=139.51
00:40:12.970 lat (usec): min=214, max=1115, avg=701.42, stdev=141.56
00:40:12.970 clat percentiles (usec):
00:40:12.970 | 1.00th=[ 318], 5.00th=[ 433], 10.00th=[ 498], 20.00th=[ 553],
00:40:12.970 | 30.00th=[ 603], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 709],
00:40:12.970 | 70.00th=[ 742], 80.00th=[ 791], 90.00th=[ 857], 95.00th=[ 889],
00:40:12.970 | 99.00th=[ 979], 99.50th=[ 1020], 99.90th=[ 1074], 99.95th=[ 1074],
00:40:12.970 | 99.99th=[ 1074]
00:40:12.970 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:40:12.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:12.970 lat (usec) : 250=0.14%, 500=7.42%, 750=46.08%, 1000=32.52%
00:40:12.970 lat (msec) : 2=12.27%, 50=1.57%
00:40:12.970 cpu : usr=1.50%, sys=1.80%, ctx=701, majf=0, minf=1
00:40:12.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:12.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.970 issued rwts: total=189,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:12.970 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:12.970 job3: (groupid=0, jobs=1): err= 0: pid=2294416: Wed Nov 20 08:37:17 2024
00:40:12.970 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:40:12.970 slat (nsec): min=8179, max=44048, avg=25329.28, stdev=1961.87
00:40:12.970 clat (usec): min=575, max=1312, avg=986.78, stdev=108.06
00:40:12.970 lat (usec): min=601, max=1338, avg=1012.11, stdev=108.11
00:40:12.971 clat percentiles (usec):
00:40:12.971 | 1.00th=[ 676], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 906],
00:40:12.971 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1020],
00:40:12.971 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156],
00:40:12.971 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1319],
00:40:12.971 | 99.99th=[ 1319]
00:40:12.971 write: IOPS=737, BW=2949KiB/s (3020kB/s)(2952KiB/1001msec); 0 zone resets
00:40:12.971 slat (nsec): min=9377, max=66412, avg=29731.90, stdev=7088.25
00:40:12.971 clat (usec): min=187, max=1025, avg=609.87, stdev=142.89
00:40:12.971 lat (usec): min=198, max=1056, avg=639.60, stdev=144.65
00:40:12.971 clat percentiles (usec):
00:40:12.971 | 1.00th=[ 258], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 494],
00:40:12.971 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644],
00:40:12.971 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 799], 95.00th=[ 865],
00:40:12.971 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029],
00:40:12.971 | 99.99th=[ 1029]
00:40:12.971 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:40:12.971 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:40:12.971 lat (usec) : 250=0.48%, 500=11.84%, 750=38.80%, 1000=29.36%
00:40:12.971 lat (msec) : 2=19.52%
00:40:12.971 cpu : usr=2.80%, sys=2.80%, ctx=1250, majf=0, minf=2
00:40:12.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:12.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:12.971 issued rwts: total=512,738,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:12.971 latency : target=0, window=0, percentile=100.00%, depth=1
00:40:12.971 
00:40:12.971 Run status group 0 (all jobs):
00:40:12.971 READ: bw=5649KiB/s (5784kB/s), 120KiB/s-2733KiB/s (123kB/s-2799kB/s), io=5660KiB (5796kB), run=1001-1002msec
00:40:12.971 WRITE: bw=10.9MiB/s (11.4MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1002msec
00:40:12.971 
00:40:12.971 Disk stats (read/write):
00:40:12.971 nvme0n1: ios=52/512, merge=0/0, ticks=1488/295, in_queue=1783, util=95.99%
00:40:12.971 nvme0n2: ios=549/971, merge=0/0, ticks=374/314, in_queue=688, util=87.65%
00:40:12.971 nvme0n3: ios=25/512, merge=0/0, ticks=472/321, in_queue=793, util=88.48%
00:40:12.971 nvme0n4: ios=486/512, merge=0/0, ticks=468/313, in_queue=781, util=89.52%
00:40:12.971 08:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:40:12.971 [global]
00:40:12.971 thread=1
00:40:12.971 invalidate=1
00:40:12.971 rw=write
00:40:12.971 time_based=1
00:40:12.971 runtime=1
00:40:12.971 ioengine=libaio
00:40:12.971 direct=1
00:40:12.971 bs=4096
00:40:12.971 iodepth=128
00:40:12.971 norandommap=0
00:40:12.971 numjobs=1
00:40:12.971 
00:40:12.971 verify_dump=1
00:40:12.971 verify_backlog=512
00:40:12.971 verify_state_save=0
00:40:12.971 do_verify=1
00:40:12.971 verify=crc32c-intel
00:40:12.971 [job0]
00:40:12.971 filename=/dev/nvme0n1
00:40:12.971 [job1]
00:40:12.971 filename=/dev/nvme0n2
00:40:12.971 [job2]
00:40:12.971 filename=/dev/nvme0n3
00:40:12.971 [job3]
00:40:12.971 filename=/dev/nvme0n4
00:40:12.971 Could not set queue depth (nvme0n1)
00:40:12.971 Could not set queue depth (nvme0n2)
00:40:12.971 Could not set queue depth (nvme0n3)
00:40:12.971 Could not set queue depth (nvme0n4)
00:40:13.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:40:13.230 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:40:13.230 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:40:13.230 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:40:13.230 fio-3.35
00:40:13.230 Starting 4 threads
00:40:14.615 
00:40:14.615 job0: (groupid=0, jobs=1): err= 0: pid=2294928: Wed Nov 20 08:37:19 2024
00:40:14.615 read: IOPS=7712, BW=30.1MiB/s (31.6MB/s)(30.3MiB/1007msec)
00:40:14.615 slat (nsec): min=954, max=7011.2k, avg=63609.88, stdev=500182.65
00:40:14.615 clat (usec): min=2645, max=17903, avg=8593.01, stdev=2160.88
00:40:14.615 lat (usec): min=2652, max=18652, avg=8656.62, stdev=2194.88
00:40:14.615 clat percentiles (usec):
00:40:14.615 | 1.00th=[ 5342], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 6980],
00:40:14.615 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8225],
00:40:14.615 | 70.00th=[ 9241], 80.00th=[10421], 90.00th=[12256], 95.00th=[12780],
00:40:14.615 | 99.00th=[13960], 99.50th=[14484], 99.90th=[17957], 99.95th=[17957],
00:40:14.615 | 99.99th=[17957]
00:40:14.615 write: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets
00:40:14.615 slat (nsec): min=1625, max=11079k, avg=57152.38, stdev=397089.80
00:40:14.615 clat (usec): min=1719, max=16930, avg=7442.88, stdev=1887.79
00:40:14.615 lat (usec): min=1727, max=16949, avg=7500.03, stdev=1902.46
00:40:14.615 clat percentiles (usec):
00:40:14.615 | 1.00th=[ 3195], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5866],
00:40:14.615 | 30.00th=[ 6456], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 7963],
00:40:14.615 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[10290], 95.00th=[11076],
00:40:14.615 | 99.00th=[13173], 99.50th=[13173], 99.90th=[14484], 99.95th=[14615],
00:40:14.615 | 99.99th=[16909]
00:40:14.615 bw ( KiB/s): min=32440, max=32768, per=30.18%, avg=32604.00, stdev=231.93, samples=2
00:40:14.615 iops : min= 8110, max= 8192, avg=8151.00, stdev=57.98, samples=2
00:40:14.615 lat (msec) : 2=0.08%, 4=1.19%, 10=81.57%, 20=17.16%
00:40:14.615 cpu : usr=5.37%, sys=7.06%, ctx=657, majf=0, minf=1
00:40:14.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:40:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:14.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:40:14.615 issued rwts: total=7766,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:14.615 latency : target=0, window=0, percentile=100.00%, depth=128
00:40:14.615 job1: (groupid=0, jobs=1): err= 0: pid=2294929: Wed
Nov 20 08:37:19 2024 00:40:14.615 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:40:14.615 slat (nsec): min=907, max=7909.7k, avg=67390.34, stdev=553159.89 00:40:14.615 clat (usec): min=2731, max=16252, avg=8714.75, stdev=1966.74 00:40:14.615 lat (usec): min=2742, max=19010, avg=8782.14, stdev=2013.37 00:40:14.615 clat percentiles (usec): 00:40:14.615 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7439], 00:40:14.615 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8356], 00:40:14.615 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11994], 95.00th=[13566], 00:40:14.615 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15795], 99.95th=[15795], 00:40:14.615 | 99.99th=[16188] 00:40:14.615 write: IOPS=7894, BW=30.8MiB/s (32.3MB/s)(31.1MiB/1007msec); 0 zone resets 00:40:14.615 slat (nsec): min=1644, max=8925.0k, avg=56034.43, stdev=388315.95 00:40:14.615 clat (usec): min=1600, max=16731, avg=7649.01, stdev=2030.71 00:40:14.615 lat (usec): min=1608, max=16740, avg=7705.05, stdev=2040.69 00:40:14.615 clat percentiles (usec): 00:40:14.615 | 1.00th=[ 2900], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5604], 00:40:14.615 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8160], 00:40:14.615 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[10290], 95.00th=[10683], 00:40:14.615 | 99.00th=[13960], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712], 00:40:14.615 | 99.99th=[16712] 00:40:14.615 bw ( KiB/s): min=29824, max=32760, per=28.96%, avg=31292.00, stdev=2076.07, samples=2 00:40:14.615 iops : min= 7456, max= 8190, avg=7823.00, stdev=519.02, samples=2 00:40:14.615 lat (msec) : 2=0.12%, 4=1.47%, 10=83.04%, 20=15.37% 00:40:14.615 cpu : usr=5.96%, sys=6.06%, ctx=605, majf=0, minf=1 00:40:14.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:40:14.615 issued rwts: total=7680,7950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:14.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:14.615 job2: (groupid=0, jobs=1): err= 0: pid=2294930: Wed Nov 20 08:37:19 2024 00:40:14.615 read: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(18.3MiB/1045msec) 00:40:14.615 slat (nsec): min=921, max=6300.6k, avg=103997.87, stdev=648913.14 00:40:14.615 clat (usec): min=8179, max=52662, avg=13943.48, stdev=5223.59 00:40:14.615 lat (usec): min=8184, max=52671, avg=14047.48, stdev=5242.29 00:40:14.615 clat percentiles (usec): 00:40:14.615 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10814], 20.00th=[11863], 00:40:14.615 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:40:14.615 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15664], 95.00th=[16909], 00:40:14.615 | 99.00th=[47449], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:40:14.615 | 99.99th=[52691] 00:40:14.615 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:40:14.615 slat (nsec): min=1613, max=14548k, avg=95114.82, stdev=634826.00 00:40:14.615 clat (usec): min=814, max=58673, avg=13108.14, stdev=4306.30 00:40:14.615 lat (usec): min=822, max=58680, avg=13203.25, stdev=4335.93 00:40:14.615 clat percentiles (usec): 00:40:14.615 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[11731], 20.00th=[12256], 00:40:14.615 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:40:14.615 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14746], 95.00th=[17171], 00:40:14.615 | 99.00th=[19530], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:40:14.615 | 99.99th=[58459] 00:40:14.615 bw ( KiB/s): min=20144, max=20480, per=18.80%, avg=20312.00, stdev=237.59, samples=2 00:40:14.615 iops : min= 5036, max= 5120, avg=5078.00, stdev=59.40, samples=2 00:40:14.615 lat (usec) : 1000=0.03% 00:40:14.615 lat (msec) : 2=0.09%, 10=5.79%, 20=92.76%, 50=0.47%, 100=0.87% 00:40:14.615 cpu : usr=3.26%, sys=4.79%, ctx=398, majf=0, minf=2 
00:40:14.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:14.615 issued rwts: total=4694,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:14.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:14.616 job3: (groupid=0, jobs=1): err= 0: pid=2294931: Wed Nov 20 08:37:19 2024 00:40:14.616 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:40:14.616 slat (nsec): min=957, max=8627.7k, avg=76367.63, stdev=614767.15 00:40:14.616 clat (usec): min=2809, max=17625, avg=9843.72, stdev=2299.34 00:40:14.616 lat (usec): min=2817, max=18392, avg=9920.09, stdev=2352.68 00:40:14.616 clat percentiles (usec): 00:40:14.616 | 1.00th=[ 6128], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 8455], 00:40:14.616 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:40:14.616 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[13829], 95.00th=[15139], 00:40:14.616 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17695], 00:40:14.616 | 99.99th=[17695] 00:40:14.616 write: IOPS=6944, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1003msec); 0 zone resets 00:40:14.616 slat (nsec): min=1637, max=12633k, avg=64027.44, stdev=475003.67 00:40:14.616 clat (usec): min=760, max=23502, avg=8857.26, stdev=2255.19 00:40:14.616 lat (usec): min=786, max=23511, avg=8921.29, stdev=2263.05 00:40:14.616 clat percentiles (usec): 00:40:14.616 | 1.00th=[ 4228], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6849], 00:40:14.616 | 30.00th=[ 7767], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9241], 00:40:14.616 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[12256], 95.00th=[12911], 00:40:14.616 | 99.00th=[15139], 99.50th=[15270], 99.90th=[20579], 99.95th=[20579], 00:40:14.616 | 99.99th=[23462] 00:40:14.616 bw ( KiB/s): min=26032, max=28672, per=25.32%, avg=27352.00, stdev=1866.76, samples=2 
00:40:14.616 iops : min= 6508, max= 7168, avg=6838.00, stdev=466.69, samples=2 00:40:14.616 lat (usec) : 1000=0.01% 00:40:14.616 lat (msec) : 2=0.08%, 4=0.41%, 10=76.50%, 20=22.94%, 50=0.05% 00:40:14.616 cpu : usr=4.69%, sys=6.89%, ctx=505, majf=0, minf=1 00:40:14.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:40:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:14.616 issued rwts: total=6656,6965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:14.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:14.616 00:40:14.616 Run status group 0 (all jobs): 00:40:14.616 READ: bw=100MiB/s (105MB/s), 17.5MiB/s-30.1MiB/s (18.4MB/s-31.6MB/s), io=105MiB (110MB), run=1003-1045msec 00:40:14.616 WRITE: bw=106MiB/s (111MB/s), 19.1MiB/s-31.8MiB/s (20.1MB/s-33.3MB/s), io=110MiB (116MB), run=1003-1045msec 00:40:14.616 00:40:14.616 Disk stats (read/write): 00:40:14.616 nvme0n1: ios=6492/6656, merge=0/0, ticks=53560/47026, in_queue=100586, util=84.37% 00:40:14.616 nvme0n2: ios=6232/6656, merge=0/0, ticks=52089/48516, in_queue=100605, util=86.94% 00:40:14.616 nvme0n3: ios=4079/4096, merge=0/0, ticks=26728/25513, in_queue=52241, util=90.27% 00:40:14.616 nvme0n4: ios=5632/5686, merge=0/0, ticks=52967/48263, in_queue=101230, util=89.41% 00:40:14.616 08:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:14.616 [global] 00:40:14.616 thread=1 00:40:14.616 invalidate=1 00:40:14.616 rw=randwrite 00:40:14.616 time_based=1 00:40:14.616 runtime=1 00:40:14.616 ioengine=libaio 00:40:14.616 direct=1 00:40:14.616 bs=4096 00:40:14.616 iodepth=128 00:40:14.616 norandommap=0 00:40:14.616 numjobs=1 00:40:14.616 00:40:14.616 verify_dump=1 00:40:14.616 verify_backlog=512 00:40:14.616 
verify_state_save=0 00:40:14.616 do_verify=1 00:40:14.616 verify=crc32c-intel 00:40:14.616 [job0] 00:40:14.616 filename=/dev/nvme0n1 00:40:14.616 [job1] 00:40:14.616 filename=/dev/nvme0n2 00:40:14.616 [job2] 00:40:14.616 filename=/dev/nvme0n3 00:40:14.616 [job3] 00:40:14.616 filename=/dev/nvme0n4 00:40:14.616 Could not set queue depth (nvme0n1) 00:40:14.616 Could not set queue depth (nvme0n2) 00:40:14.616 Could not set queue depth (nvme0n3) 00:40:14.616 Could not set queue depth (nvme0n4) 00:40:15.198 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:15.198 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:15.198 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:15.198 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:15.198 fio-3.35 00:40:15.198 Starting 4 threads 00:40:16.585 00:40:16.585 job0: (groupid=0, jobs=1): err= 0: pid=2295457: Wed Nov 20 08:37:20 2024 00:40:16.585 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:40:16.585 slat (nsec): min=933, max=12064k, avg=81609.47, stdev=630742.26 00:40:16.585 clat (usec): min=2444, max=40181, avg=10535.79, stdev=4670.23 00:40:16.585 lat (usec): min=2449, max=40187, avg=10617.40, stdev=4716.48 00:40:16.585 clat percentiles (usec): 00:40:16.585 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7111], 00:40:16.585 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9765], 00:40:16.585 | 70.00th=[12125], 80.00th=[13829], 90.00th=[16188], 95.00th=[20055], 00:40:16.585 | 99.00th=[27919], 99.50th=[28443], 99.90th=[39584], 99.95th=[39584], 00:40:16.585 | 99.99th=[40109] 00:40:16.585 write: IOPS=6161, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1006msec); 0 zone resets 00:40:16.585 slat (nsec): min=1567, max=18631k, avg=75121.13, 
stdev=608156.54 00:40:16.585 clat (usec): min=1127, max=42762, avg=10141.77, stdev=5869.23 00:40:16.585 lat (usec): min=1138, max=42764, avg=10216.89, stdev=5913.31 00:40:16.585 clat percentiles (usec): 00:40:16.585 | 1.00th=[ 2671], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 6521], 00:40:16.585 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 9503], 00:40:16.585 | 70.00th=[10945], 80.00th=[13829], 90.00th=[15795], 95.00th=[19792], 00:40:16.585 | 99.00th=[36963], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:40:16.585 | 99.99th=[42730] 00:40:16.585 bw ( KiB/s): min=20536, max=28672, per=24.59%, avg=24604.00, stdev=5753.02, samples=2 00:40:16.585 iops : min= 5134, max= 7168, avg=6151.00, stdev=1438.26, samples=2 00:40:16.585 lat (msec) : 2=0.23%, 4=1.35%, 10=59.90%, 20=33.50%, 50=5.02% 00:40:16.585 cpu : usr=5.07%, sys=5.67%, ctx=390, majf=0, minf=1 00:40:16.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:16.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:16.585 issued rwts: total=6144,6198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:16.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:16.585 job1: (groupid=0, jobs=1): err= 0: pid=2295458: Wed Nov 20 08:37:20 2024 00:40:16.585 read: IOPS=6993, BW=27.3MiB/s (28.6MB/s)(28.5MiB/1043msec) 00:40:16.585 slat (nsec): min=914, max=20997k, avg=62173.67, stdev=489385.84 00:40:16.585 clat (usec): min=3356, max=51876, avg=8966.22, stdev=5918.72 00:40:16.585 lat (usec): min=3361, max=51879, avg=9028.39, stdev=5933.81 00:40:16.585 clat percentiles (usec): 00:40:16.585 | 1.00th=[ 4015], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6521], 00:40:16.585 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7898], 00:40:16.585 | 70.00th=[ 8717], 80.00th=[10290], 90.00th=[11863], 95.00th=[14746], 00:40:16.585 | 99.00th=[45351], 
99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:40:16.585 | 99.99th=[51643] 00:40:16.585 write: IOPS=7363, BW=28.8MiB/s (30.2MB/s)(30.0MiB/1043msec); 0 zone resets 00:40:16.585 slat (nsec): min=1484, max=10953k, avg=66554.81, stdev=454979.60 00:40:16.585 clat (usec): min=1121, max=84897, avg=8709.08, stdev=8538.74 00:40:16.585 lat (usec): min=1133, max=84906, avg=8775.63, stdev=8593.71 00:40:16.585 clat percentiles (usec): 00:40:16.585 | 1.00th=[ 3523], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 6128], 00:40:16.585 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:40:16.585 | 70.00th=[ 7701], 80.00th=[ 8455], 90.00th=[10421], 95.00th=[16909], 00:40:16.585 | 99.00th=[61604], 99.50th=[71828], 99.90th=[80217], 99.95th=[84411], 00:40:16.585 | 99.99th=[84411] 00:40:16.585 bw ( KiB/s): min=27824, max=33608, per=30.70%, avg=30716.00, stdev=4089.91, samples=2 00:40:16.585 iops : min= 6956, max= 8402, avg=7679.00, stdev=1022.48, samples=2 00:40:16.585 lat (msec) : 2=0.09%, 4=2.28%, 10=81.73%, 20=12.65%, 50=2.19% 00:40:16.585 lat (msec) : 100=1.06% 00:40:16.585 cpu : usr=3.84%, sys=6.43%, ctx=729, majf=0, minf=2 00:40:16.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:16.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:16.585 issued rwts: total=7294,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:16.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:16.585 job2: (groupid=0, jobs=1): err= 0: pid=2295459: Wed Nov 20 08:37:20 2024 00:40:16.585 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:40:16.586 slat (nsec): min=958, max=8104.2k, avg=68444.07, stdev=538419.03 00:40:16.586 clat (usec): min=2804, max=16857, avg=8702.43, stdev=2188.20 00:40:16.586 lat (usec): min=2808, max=18248, avg=8770.88, stdev=2225.47 00:40:16.586 clat percentiles (usec): 00:40:16.586 | 
1.00th=[ 3916], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:40:16.586 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8455], 00:40:16.586 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[12256], 95.00th=[13435], 00:40:16.586 | 99.00th=[14615], 99.50th=[15139], 99.90th=[16712], 99.95th=[16712], 00:40:16.586 | 99.99th=[16909] 00:40:16.586 write: IOPS=7872, BW=30.8MiB/s (32.2MB/s)(30.9MiB/1005msec); 0 zone resets 00:40:16.586 slat (nsec): min=1589, max=7390.6k, avg=55691.41, stdev=334003.91 00:40:16.586 clat (usec): min=1152, max=16759, avg=7670.91, stdev=1709.12 00:40:16.586 lat (usec): min=1161, max=16767, avg=7726.60, stdev=1718.37 00:40:16.586 clat percentiles (usec): 00:40:16.586 | 1.00th=[ 3359], 5.00th=[ 4817], 10.00th=[ 5407], 20.00th=[ 6325], 00:40:16.586 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8094], 00:40:16.586 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10552], 00:40:16.586 | 99.00th=[12780], 99.50th=[13173], 99.90th=[15926], 99.95th=[15926], 00:40:16.586 | 99.99th=[16712] 00:40:16.586 bw ( KiB/s): min=29520, max=32752, per=31.12%, avg=31136.00, stdev=2285.37, samples=2 00:40:16.586 iops : min= 7380, max= 8188, avg=7784.00, stdev=571.34, samples=2 00:40:16.586 lat (msec) : 2=0.15%, 4=1.56%, 10=84.52%, 20=13.76% 00:40:16.586 cpu : usr=4.38%, sys=7.17%, ctx=759, majf=0, minf=2 00:40:16.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:16.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:16.586 issued rwts: total=7680,7912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:16.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:16.586 job3: (groupid=0, jobs=1): err= 0: pid=2295460: Wed Nov 20 08:37:20 2024 00:40:16.586 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:40:16.586 slat (nsec): min=924, max=15277k, avg=106520.88, 
stdev=875520.92 00:40:16.586 clat (usec): min=3475, max=30335, avg=14241.04, stdev=4738.07 00:40:16.586 lat (usec): min=3482, max=30349, avg=14347.56, stdev=4798.61 00:40:16.586 clat percentiles (usec): 00:40:16.586 | 1.00th=[ 6456], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[ 9503], 00:40:16.586 | 30.00th=[10290], 40.00th=[12911], 50.00th=[14091], 60.00th=[15008], 00:40:16.586 | 70.00th=[16581], 80.00th=[18220], 90.00th=[20317], 95.00th=[22152], 00:40:16.586 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:40:16.586 | 99.99th=[30278] 00:40:16.586 write: IOPS=4282, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1004msec); 0 zone resets 00:40:16.586 slat (nsec): min=1566, max=15810k, avg=120570.20, stdev=809673.09 00:40:16.586 clat (usec): min=755, max=67519, avg=16065.47, stdev=12768.94 00:40:16.586 lat (usec): min=763, max=67527, avg=16186.04, stdev=12855.47 00:40:16.586 clat percentiles (usec): 00:40:16.586 | 1.00th=[ 2704], 5.00th=[ 5997], 10.00th=[ 7898], 20.00th=[ 8717], 00:40:16.586 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[12125], 60.00th=[13304], 00:40:16.586 | 70.00th=[15139], 80.00th=[18482], 90.00th=[31327], 95.00th=[53216], 00:40:16.586 | 99.00th=[63177], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:40:16.586 | 99.99th=[67634] 00:40:16.586 bw ( KiB/s): min=16192, max=17392, per=16.78%, avg=16792.00, stdev=848.53, samples=2 00:40:16.586 iops : min= 4048, max= 4348, avg=4198.00, stdev=212.13, samples=2 00:40:16.586 lat (usec) : 1000=0.04% 00:40:16.586 lat (msec) : 2=0.27%, 4=0.64%, 10=31.36%, 20=52.76%, 50=12.02% 00:40:16.586 lat (msec) : 100=2.91% 00:40:16.586 cpu : usr=2.39%, sys=4.89%, ctx=338, majf=0, minf=1 00:40:16.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:16.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:16.586 issued rwts: total=4096,4300,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:16.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:16.586 00:40:16.586 Run status group 0 (all jobs): 00:40:16.586 READ: bw=94.4MiB/s (99.0MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=98.5MiB (103MB), run=1004-1043msec 00:40:16.586 WRITE: bw=97.7MiB/s (102MB/s), 16.7MiB/s-30.8MiB/s (17.5MB/s-32.2MB/s), io=102MiB (107MB), run=1004-1043msec 00:40:16.586 00:40:16.586 Disk stats (read/write): 00:40:16.586 nvme0n1: ios=5273/5632, merge=0/0, ticks=48118/51815, in_queue=99933, util=86.77% 00:40:16.586 nvme0n2: ios=5927/6144, merge=0/0, ticks=40581/47836, in_queue=88417, util=96.33% 00:40:16.586 nvme0n3: ios=6186/6655, merge=0/0, ticks=51260/49970, in_queue=101230, util=92.08% 00:40:16.586 nvme0n4: ios=3221/3584, merge=0/0, ticks=40615/57043, in_queue=97658, util=89.42% 00:40:16.586 08:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:16.586 08:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2295738 00:40:16.586 08:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:16.586 08:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:16.586 [global] 00:40:16.586 thread=1 00:40:16.586 invalidate=1 00:40:16.586 rw=read 00:40:16.586 time_based=1 00:40:16.586 runtime=10 00:40:16.586 ioengine=libaio 00:40:16.586 direct=1 00:40:16.586 bs=4096 00:40:16.586 iodepth=1 00:40:16.586 norandommap=1 00:40:16.586 numjobs=1 00:40:16.586 00:40:16.586 [job0] 00:40:16.586 filename=/dev/nvme0n1 00:40:16.586 [job1] 00:40:16.586 filename=/dev/nvme0n2 00:40:16.586 [job2] 00:40:16.586 filename=/dev/nvme0n3 00:40:16.586 [job3] 00:40:16.586 filename=/dev/nvme0n4 00:40:16.586 Could not set queue depth (nvme0n1) 00:40:16.586 Could not set queue depth (nvme0n2) 
00:40:16.586 Could not set queue depth (nvme0n3) 00:40:16.586 Could not set queue depth (nvme0n4) 00:40:16.846 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.846 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.846 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.846 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.846 fio-3.35 00:40:16.846 Starting 4 threads 00:40:19.389 08:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:19.389 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10514432, buflen=4096 00:40:19.389 fio: pid=2295987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:19.389 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:19.649 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:40:19.649 fio: pid=2295986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:19.649 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:19.649 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:19.909 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:19.909 08:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:19.909 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:40:19.909 fio: pid=2295982, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:20.169 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:20.169 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:20.169 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=319488, buflen=4096 00:40:20.169 fio: pid=2295983, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:20.169 00:40:20.169 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2295982: Wed Nov 20 08:37:24 2024 00:40:20.169 read: IOPS=27, BW=108KiB/s (111kB/s)(324KiB/2999msec) 00:40:20.169 slat (usec): min=5, max=25687, avg=337.80, stdev=2834.03 00:40:20.169 clat (usec): min=576, max=42965, avg=36419.25, stdev=14206.85 00:40:20.169 lat (usec): min=619, max=42990, avg=36761.12, stdev=13680.59 00:40:20.169 clat percentiles (usec): 00:40:20.169 | 1.00th=[ 578], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[41681], 00:40:20.169 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:20.169 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:40:20.169 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:40:20.169 | 99.99th=[42730] 00:40:20.169 bw ( KiB/s): min= 96, max= 96, per=2.74%, avg=96.00, stdev= 0.00, samples=5 00:40:20.169 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:40:20.169 lat 
(usec) : 750=2.44%, 1000=10.98% 00:40:20.169 lat (msec) : 50=85.37% 00:40:20.169 cpu : usr=0.10%, sys=0.00%, ctx=84, majf=0, minf=1 00:40:20.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:20.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:20.169 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2295983: Wed Nov 20 08:37:24 2024 00:40:20.169 read: IOPS=24, BW=97.8KiB/s (100kB/s)(312KiB/3189msec) 00:40:20.169 slat (usec): min=25, max=10508, avg=280.20, stdev=1585.70 00:40:20.169 clat (usec): min=1082, max=42070, avg=40303.41, stdev=7884.81 00:40:20.169 lat (usec): min=1120, max=51997, avg=40586.84, stdev=8080.30 00:40:20.169 clat percentiles (usec): 00:40:20.169 | 1.00th=[ 1090], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:40:20.169 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:40:20.169 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:20.169 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:20.169 | 99.99th=[42206] 00:40:20.169 bw ( KiB/s): min= 96, max= 104, per=2.80%, avg=98.33, stdev= 3.67, samples=6 00:40:20.169 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:40:20.169 lat (msec) : 2=3.80%, 50=94.94% 00:40:20.169 cpu : usr=0.13%, sys=0.00%, ctx=81, majf=0, minf=2 00:40:20.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:20.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.169 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:40:20.169 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2295986: Wed Nov 20 08:37:24 2024 00:40:20.169 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(268KiB/2794msec) 00:40:20.169 slat (nsec): min=25720, max=34847, avg=26283.69, stdev=1085.37 00:40:20.169 clat (usec): min=870, max=42132, avg=41328.16, stdev=5020.05 00:40:20.169 lat (usec): min=905, max=42158, avg=41354.44, stdev=5018.99 00:40:20.169 clat percentiles (usec): 00:40:20.169 | 1.00th=[ 873], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:40:20.169 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:20.169 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:20.169 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:20.169 | 99.99th=[42206] 00:40:20.169 bw ( KiB/s): min= 96, max= 96, per=2.74%, avg=96.00, stdev= 0.00, samples=5 00:40:20.169 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:40:20.169 lat (usec) : 1000=1.47% 00:40:20.169 lat (msec) : 50=97.06% 00:40:20.169 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:40:20.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:20.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:20.169 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2295987: Wed Nov 20 08:37:24 2024 00:40:20.169 read: IOPS=985, BW=3940KiB/s (4035kB/s)(10.0MiB/2606msec) 00:40:20.169 slat (nsec): min=7182, max=61148, avg=25375.24, stdev=2542.14 00:40:20.169 clat (usec): min=556, max=1233, avg=972.68, stdev=68.46 00:40:20.169 lat (usec): min=581, 
max=1258, avg=998.05, stdev=68.64 00:40:20.169 clat percentiles (usec): 00:40:20.169 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 938], 00:40:20.169 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 988], 00:40:20.169 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:40:20.169 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:40:20.169 | 99.99th=[ 1237] 00:40:20.169 bw ( KiB/s): min= 3936, max= 4080, per=100.00%, avg=3987.20, stdev=55.60, samples=5 00:40:20.169 iops : min= 984, max= 1020, avg=996.80, stdev=13.90, samples=5 00:40:20.169 lat (usec) : 750=0.78%, 1000=69.16% 00:40:20.169 lat (msec) : 2=30.02% 00:40:20.169 cpu : usr=0.92%, sys=3.11%, ctx=2568, majf=0, minf=2 00:40:20.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:20.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.169 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:20.169 00:40:20.170 Run status group 0 (all jobs): 00:40:20.170 READ: bw=3503KiB/s (3587kB/s), 95.9KiB/s-3940KiB/s (98.2kB/s-4035kB/s), io=10.9MiB (11.4MB), run=2606-3189msec 00:40:20.170 00:40:20.170 Disk stats (read/write): 00:40:20.170 nvme0n1: ios=76/0, merge=0/0, ticks=2784/0, in_queue=2784, util=93.89% 00:40:20.170 nvme0n2: ios=76/0, merge=0/0, ticks=3061/0, in_queue=3061, util=95.07% 00:40:20.170 nvme0n3: ios=62/0, merge=0/0, ticks=2561/0, in_queue=2561, util=95.99% 00:40:20.170 nvme0n4: ios=2567/0, merge=0/0, ticks=2497/0, in_queue=2497, util=96.42% 00:40:20.170 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:20.170 08:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:20.429 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:20.429 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:20.689 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:20.689 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2295738 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:20.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:20.950 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:20.950 08:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:21.211 nvmf hotplug test: fio failed as expected 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:21.211 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:21.211 rmmod nvme_tcp 00:40:21.473 rmmod nvme_fabrics 00:40:21.473 rmmod nvme_keyring 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 2292472 ']' 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 2292472 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2292472 ']' 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2292472 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:40:21.473 08:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2292472 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2292472' 00:40:21.473 killing process with pid 2292472 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2292472 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2292472 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:21.473 08:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:24.021 08:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:40:24.021 00:40:24.021 real 0m28.518s 00:40:24.021 user 2m18.596s 00:40:24.021 sys 0m12.189s 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:24.021 ************************************ 00:40:24.021 END TEST nvmf_fio_target 00:40:24.021 ************************************ 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:24.021 08:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:24.021 ************************************ 00:40:24.021 START TEST nvmf_bdevio 00:40:24.021 ************************************ 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:24.021 * Looking for test storage... 00:40:24.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:24.021 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:24.022 08:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.022 --rc genhtml_branch_coverage=1 
00:40:24.022 --rc genhtml_function_coverage=1 00:40:24.022 --rc genhtml_legend=1 00:40:24.022 --rc geninfo_all_blocks=1 00:40:24.022 --rc geninfo_unexecuted_blocks=1 00:40:24.022 00:40:24.022 ' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.022 --rc genhtml_branch_coverage=1 00:40:24.022 --rc genhtml_function_coverage=1 00:40:24.022 --rc genhtml_legend=1 00:40:24.022 --rc geninfo_all_blocks=1 00:40:24.022 --rc geninfo_unexecuted_blocks=1 00:40:24.022 00:40:24.022 ' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.022 --rc genhtml_branch_coverage=1 00:40:24.022 --rc genhtml_function_coverage=1 00:40:24.022 --rc genhtml_legend=1 00:40:24.022 --rc geninfo_all_blocks=1 00:40:24.022 --rc geninfo_unexecuted_blocks=1 00:40:24.022 00:40:24.022 ' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.022 --rc genhtml_branch_coverage=1 00:40:24.022 --rc genhtml_function_coverage=1 00:40:24.022 --rc genhtml_legend=1 00:40:24.022 --rc geninfo_all_blocks=1 00:40:24.022 --rc geninfo_unexecuted_blocks=1 00:40:24.022 00:40:24.022 ' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:24.022 08:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:24.022 
08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:24.022 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:24.023 08:37:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:40:24.023 08:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 
00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:32.165 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:32.165 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:32.165 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:32.165 Found net devices under 0000:31:00.0: cvl_0_0 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:32.165 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.166 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:32.166 Found net devices under 0000:31:00.1: cvl_0_1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 
00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:32.166 
08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:32.166 10.0.0.1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 
00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:32.166 10.0.0.2 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:32.166 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:32.166 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:32.167 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:32.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:32.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.519 ms 00:40:32.167 00:40:32.167 --- 10.0.0.1 ping statistics --- 00:40:32.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.167 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:32.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:32.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:40:32.167 00:40:32.167 --- 10.0.0.2 ping statistics --- 00:40:32.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.167 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:32.167 
08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:32.167 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:32.167 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:32.168 08:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:32.168 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=2301426 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 2301426 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2301426 ']' 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.428 08:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.428 [2024-11-20 08:37:36.980502] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:32.428 [2024-11-20 08:37:36.981585] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:40:32.428 [2024-11-20 08:37:36.981633] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.428 [2024-11-20 08:37:37.086661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:32.428 [2024-11-20 08:37:37.121404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:32.428 [2024-11-20 08:37:37.121436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:32.429 [2024-11-20 08:37:37.121444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:32.429 [2024-11-20 08:37:37.121450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:32.429 [2024-11-20 08:37:37.121456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:32.429 [2024-11-20 08:37:37.122953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:32.429 [2024-11-20 08:37:37.123092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:32.429 [2024-11-20 08:37:37.123216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:32.429 [2024-11-20 08:37:37.123216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:32.689 [2024-11-20 08:37:37.177902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:32.689 [2024-11-20 08:37:37.179170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:32.689 [2024-11-20 08:37:37.179489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:32.689 [2024-11-20 08:37:37.179884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:32.689 [2024-11-20 08:37:37.179932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 [2024-11-20 08:37:37.251974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 Malloc0 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.689 [2024-11-20 08:37:37.340254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:32.689 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:32.689 { 00:40:32.689 "params": { 00:40:32.689 "name": "Nvme$subsystem", 00:40:32.689 "trtype": "$TEST_TRANSPORT", 00:40:32.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:32.689 "adrfam": "ipv4", 00:40:32.689 "trsvcid": "$NVMF_PORT", 00:40:32.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:32.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:32.689 "hdgst": ${hdgst:-false}, 00:40:32.690 "ddgst": ${ddgst:-false} 00:40:32.690 }, 00:40:32.690 "method": "bdev_nvme_attach_controller" 00:40:32.690 } 00:40:32.690 EOF 00:40:32.690 )") 00:40:32.690 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:40:32.690 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:40:32.690 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:40:32.690 08:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:32.690 "params": { 00:40:32.690 "name": "Nvme1", 00:40:32.690 "trtype": "tcp", 00:40:32.690 "traddr": "10.0.0.2", 00:40:32.690 "adrfam": "ipv4", 00:40:32.690 "trsvcid": "4420", 00:40:32.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:32.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:32.690 "hdgst": false, 00:40:32.690 "ddgst": false 00:40:32.690 }, 00:40:32.690 "method": "bdev_nvme_attach_controller" 00:40:32.690 }' 00:40:32.690 [2024-11-20 08:37:37.405659] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:40:32.690 [2024-11-20 08:37:37.405728] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301627 ] 00:40:32.950 [2024-11-20 08:37:37.487923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:32.950 [2024-11-20 08:37:37.531383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.950 [2024-11-20 08:37:37.531503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:32.950 [2024-11-20 08:37:37.531507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.211 I/O targets: 00:40:33.211 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:33.211 00:40:33.211 00:40:33.211 CUnit - A unit testing framework for C - Version 2.1-3 00:40:33.211 http://cunit.sourceforge.net/ 00:40:33.211 00:40:33.211 00:40:33.211 Suite: bdevio tests on: Nvme1n1 00:40:33.211 Test: blockdev write read block ...passed 00:40:33.211 Test: blockdev write zeroes read block ...passed 00:40:33.211 Test: blockdev write zeroes read no split ...passed 00:40:33.211 Test: blockdev 
write zeroes read split ...passed 00:40:33.211 Test: blockdev write zeroes read split partial ...passed 00:40:33.211 Test: blockdev reset ...[2024-11-20 08:37:37.879198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:33.211 [2024-11-20 08:37:37.879264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ed4b0 (9): Bad file descriptor 00:40:33.211 [2024-11-20 08:37:37.886057] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:33.211 passed 00:40:33.211 Test: blockdev write read 8 blocks ...passed 00:40:33.211 Test: blockdev write read size > 128k ...passed 00:40:33.211 Test: blockdev write read invalid size ...passed 00:40:33.472 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:33.472 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:33.472 Test: blockdev write read max offset ...passed 00:40:33.472 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:33.472 Test: blockdev writev readv 8 blocks ...passed 00:40:33.472 Test: blockdev writev readv 30 x 1block ...passed 00:40:33.472 Test: blockdev writev readv block ...passed 00:40:33.472 Test: blockdev writev readv size > 128k ...passed 00:40:33.472 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:33.472 Test: blockdev comparev and writev ...[2024-11-20 08:37:38.152197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.152233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 
[2024-11-20 08:37:38.152243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.152719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.152728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.152743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.153324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.153332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.153342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.153347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:33.472 [2024-11-20 08:37:38.153890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:33.472 [2024-11-20 08:37:38.153895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:33.472 passed 00:40:33.733 Test: blockdev nvme passthru rw ...passed 00:40:33.733 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:37:38.238764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:33.733 [2024-11-20 08:37:38.238774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:33.733 [2024-11-20 08:37:38.239132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:33.733 [2024-11-20 08:37:38.239139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:33.733 [2024-11-20 08:37:38.239489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:33.733 [2024-11-20 08:37:38.239497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:33.733 [2024-11-20 08:37:38.239845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:33.733 [2024-11-20 08:37:38.239852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:33.733 passed 00:40:33.733 Test: blockdev nvme admin passthru ...passed 00:40:33.733 Test: blockdev copy ...passed 00:40:33.733 00:40:33.733 Run Summary: Type Total Ran Passed Failed Inactive 00:40:33.733 suites 1 1 n/a 0 0 00:40:33.733 tests 23 23 23 0 0 00:40:33.733 asserts 152 152 152 0 n/a 00:40:33.733 00:40:33.733 Elapsed time = 1.169 
seconds 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:33.733 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:33.733 rmmod nvme_tcp 00:40:33.733 rmmod nvme_fabrics 00:40:33.733 rmmod nvme_keyring 00:40:33.994 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:33.994 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:40:33.994 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:40:33.994 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 2301426 ']' 00:40:33.994 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2301426 ']' 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2301426' 00:40:33.995 killing process with pid 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2301426 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:33.995 08:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:36.541 08:37:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:40:36.541 00:40:36.541 real 0m12.471s 00:40:36.541 user 0m9.425s 00:40:36.541 sys 0m6.966s 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:40:36.541 ************************************ 00:40:36.541 END TEST nvmf_bdevio 00:40:36.541 ************************************ 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:36.541 ************************************ 00:40:36.541 START TEST nvmf_target_multipath 00:40:36.541 ************************************ 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:36.541 * Looking for test storage... 
00:40:36.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:36.541 08:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.541 08:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:36.541 08:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:36.541 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:36.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.541 --rc genhtml_branch_coverage=1 00:40:36.542 --rc genhtml_function_coverage=1 00:40:36.542 --rc genhtml_legend=1 00:40:36.542 --rc geninfo_all_blocks=1 00:40:36.542 --rc geninfo_unexecuted_blocks=1 00:40:36.542 00:40:36.542 ' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.542 --rc genhtml_branch_coverage=1 00:40:36.542 --rc genhtml_function_coverage=1 00:40:36.542 --rc genhtml_legend=1 00:40:36.542 --rc geninfo_all_blocks=1 00:40:36.542 --rc geninfo_unexecuted_blocks=1 00:40:36.542 00:40:36.542 ' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.542 --rc genhtml_branch_coverage=1 00:40:36.542 --rc genhtml_function_coverage=1 00:40:36.542 --rc genhtml_legend=1 00:40:36.542 --rc geninfo_all_blocks=1 00:40:36.542 --rc geninfo_unexecuted_blocks=1 00:40:36.542 00:40:36.542 ' 00:40:36.542 08:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.542 --rc genhtml_branch_coverage=1 00:40:36.542 --rc genhtml_function_coverage=1 00:40:36.542 --rc genhtml_legend=1 00:40:36.542 --rc geninfo_all_blocks=1 00:40:36.542 --rc geninfo_unexecuted_blocks=1 00:40:36.542 00:40:36.542 ' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.542 
08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:36.542 08:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:36.542 08:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:40:36.542 08:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- 
# local -a pci_net_devs 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:44.681 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:44.681 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:44.681 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:44.681 Found net devices under 0000:31:00.0: cvl_0_0 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:44.681 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:44.681 Found net devices under 0000:31:00.1: cvl_0_1 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:44.681 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy 
transport=tcp ip_pool=0x0a000001 max 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:40:44.681 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:44.682 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 
00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:44.682 10.0.0.1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:44.682 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:44.682 10.0.0.2 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:44.682 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:44.945 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:44.945 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:44.945 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:44.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:44.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.457 ms 00:40:44.946 00:40:44.946 --- 10.0.0.1 ping statistics --- 00:40:44.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.946 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:44.946 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:44.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:44.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:40:44.946 00:40:44.946 --- 10.0.0.2 ping statistics --- 00:40:44.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.946 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.946 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:44.946 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:44.947 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:44.947 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:44.947 only one NIC for nvmf test 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:44.947 rmmod nvme_tcp 00:40:44.947 rmmod nvme_fabrics 00:40:44.947 rmmod nvme_keyring 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:44.947 08:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:44.947 08:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # 
reset_setup_interfaces 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@106 -- # set -e 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # 
dev_map=() 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:47.490 00:40:47.490 real 0m10.903s 00:40:47.490 user 0m2.445s 00:40:47.490 sys 0m6.412s 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:47.490 ************************************ 00:40:47.490 END TEST nvmf_target_multipath 00:40:47.490 ************************************ 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:47.490 ************************************ 00:40:47.490 START TEST nvmf_zcopy 00:40:47.490 ************************************ 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:47.490 * Looking for test storage... 
00:40:47.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:40:47.490 08:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:47.490 08:37:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:47.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.490 --rc genhtml_branch_coverage=1 00:40:47.490 --rc genhtml_function_coverage=1 00:40:47.490 --rc genhtml_legend=1 00:40:47.490 --rc geninfo_all_blocks=1 00:40:47.490 --rc geninfo_unexecuted_blocks=1 00:40:47.490 00:40:47.490 ' 00:40:47.490 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:47.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.490 --rc genhtml_branch_coverage=1 00:40:47.490 --rc genhtml_function_coverage=1 00:40:47.490 --rc genhtml_legend=1 00:40:47.491 --rc geninfo_all_blocks=1 00:40:47.491 --rc geninfo_unexecuted_blocks=1 00:40:47.491 00:40:47.491 ' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.491 --rc genhtml_branch_coverage=1 00:40:47.491 --rc genhtml_function_coverage=1 00:40:47.491 --rc genhtml_legend=1 00:40:47.491 --rc geninfo_all_blocks=1 00:40:47.491 --rc geninfo_unexecuted_blocks=1 00:40:47.491 00:40:47.491 ' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.491 --rc genhtml_branch_coverage=1 00:40:47.491 --rc genhtml_function_coverage=1 00:40:47.491 --rc genhtml_legend=1 00:40:47.491 --rc geninfo_all_blocks=1 00:40:47.491 --rc geninfo_unexecuted_blocks=1 00:40:47.491 00:40:47.491 ' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:47.491 08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:40:47.491 
08:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:55.634 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:55.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:55.635 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:55.635 Found net devices under 0000:31:00.0: cvl_0_0 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:55.635 Found net devices under 0000:31:00.1: cvl_0_1 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:40:55.635 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:55.636 08:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:40:55.636 08:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:55.636 08:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:55.636 10.0.0.1 00:40:55.636 08:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:55.636 10.0.0.2 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:55.636 08:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:55.636 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:55.637 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ 
-n cvl_0_0 ]] 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:55.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:55.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.562 ms
00:40:55.899 
00:40:55.899 --- 10.0.0.1 ping statistics ---
00:40:55.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:55.899 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:40:55.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:40:55.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms
00:40:55.899 
00:40:55.899 --- 10.0.0.2 ping statistics ---
00:40:55.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:55.899 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ ))
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:40:55.899 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=2311137
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 2311137
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2311137 ']'
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:55.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:55.900 08:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:55.900 [2024-11-20 08:38:00.596944] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:40:55.900 [2024-11-20 08:38:00.598183] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:40:55.900 [2024-11-20 08:38:00.598241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:56.161 [2024-11-20 08:38:00.705236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:56.161 [2024-11-20 08:38:00.757024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:56.161 [2024-11-20 08:38:00.757073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:56.161 [2024-11-20 08:38:00.757081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:56.161 [2024-11-20 08:38:00.757089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:56.161 [2024-11-20 08:38:00.757095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:56.161 [2024-11-20 08:38:00.757857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:56.161 [2024-11-20 08:38:00.833998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:40:56.161 [2024-11-20 08:38:00.834287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.732 [2024-11-20 08:38:01.442720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.732 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.994 [2024-11-20 08:38:01.470945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.994 malloc0
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:40:56.994 {
00:40:56.994 "params": {
00:40:56.994 "name": "Nvme$subsystem",
00:40:56.994 "trtype": "$TEST_TRANSPORT",
00:40:56.994 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:56.994 "adrfam": "ipv4",
00:40:56.994 "trsvcid": "$NVMF_PORT",
00:40:56.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:56.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:56.994 "hdgst": ${hdgst:-false},
00:40:56.994 "ddgst": ${ddgst:-false}
00:40:56.994 },
00:40:56.994 "method": "bdev_nvme_attach_controller"
00:40:56.994 }
00:40:56.994 EOF
00:40:56.994 )")
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq .
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:40:56.994 08:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:40:56.994 "params": {
00:40:56.994 "name": "Nvme1",
00:40:56.994 "trtype": "tcp",
00:40:56.994 "traddr": "10.0.0.2",
00:40:56.994 "adrfam": "ipv4",
00:40:56.994 "trsvcid": "4420",
00:40:56.994 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:40:56.994 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:40:56.994 "hdgst": false,
00:40:56.994 "ddgst": false
00:40:56.994 },
00:40:56.994 "method": "bdev_nvme_attach_controller"
00:40:56.994 }'
00:40:56.994 [2024-11-20 08:38:01.570692] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:40:56.994 [2024-11-20 08:38:01.570760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311479 ]
00:40:56.994 [2024-11-20 08:38:01.655365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:56.994 [2024-11-20 08:38:01.697279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:57.255 Running I/O for 10 seconds...
00:40:59.579 6574.00 IOPS, 51.36 MiB/s
[2024-11-20T07:38:05.248Z] 6636.50 IOPS, 51.85 MiB/s
[2024-11-20T07:38:06.187Z] 6642.67 IOPS, 51.90 MiB/s
[2024-11-20T07:38:07.128Z] 6658.00 IOPS, 52.02 MiB/s
[2024-11-20T07:38:08.069Z] 6662.60 IOPS, 52.05 MiB/s
[2024-11-20T07:38:09.008Z] 6806.83 IOPS, 53.18 MiB/s
[2024-11-20T07:38:10.393Z] 7216.00 IOPS, 56.38 MiB/s
[2024-11-20T07:38:11.332Z] 7521.50 IOPS, 58.76 MiB/s
[2024-11-20T07:38:12.313Z] 7756.11 IOPS, 60.59 MiB/s
[2024-11-20T07:38:12.313Z] 7946.00 IOPS, 62.08 MiB/s
00:41:07.584 Latency(us)
00:41:07.584 [2024-11-20T07:38:12.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:07.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:41:07.585 Verification LBA range: start 0x0 length 0x1000
00:41:07.585 Nvme1n1 : 10.01 7949.36 62.10 0.00 0.00 16050.37 1140.05 27743.57
00:41:07.585 [2024-11-20T07:38:12.314Z] ===================================================================================================================
00:41:07.585 [2024-11-20T07:38:12.314Z] Total : 7949.36 62.10 0.00 0.00 16050.37 1140.05 27743.57
00:41:07.585 
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=2313356
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:07.585 {
00:41:07.585 "params": {
00:41:07.585 "name": "Nvme$subsystem",
00:41:07.585 "trtype": "$TEST_TRANSPORT",
00:41:07.585 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:07.585 "adrfam": "ipv4",
00:41:07.585 "trsvcid": "$NVMF_PORT",
00:41:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:07.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:07.585 "hdgst": ${hdgst:-false},
00:41:07.585 "ddgst": ${ddgst:-false}
00:41:07.585 },
00:41:07.585 "method": "bdev_nvme_attach_controller"
00:41:07.585 }
00:41:07.585 EOF
00:41:07.585 )")
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:41:07.585 [2024-11-20 08:38:12.118264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.118292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq .
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:41:07.585 08:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:41:07.585 "params": {
00:41:07.585 "name": "Nvme1",
00:41:07.585 "trtype": "tcp",
00:41:07.585 "traddr": "10.0.0.2",
00:41:07.585 "adrfam": "ipv4",
00:41:07.585 "trsvcid": "4420",
00:41:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:41:07.585 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:41:07.585 "hdgst": false,
00:41:07.585 "ddgst": false
00:41:07.585 },
00:41:07.585 "method": "bdev_nvme_attach_controller"
00:41:07.585 }'
00:41:07.585 [2024-11-20 08:38:12.130237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.130245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.142234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.142242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.154234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.154242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.163101] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:41:07.585 [2024-11-20 08:38:12.163150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313356 ]
00:41:07.585 [2024-11-20 08:38:12.166234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.166242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.178233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.178241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.190233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.190241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.202234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.202242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.214233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.214240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.226233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.226241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.238234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.238241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.239643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:07.585 [2024-11-20 08:38:12.250236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.250247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.585 [2024-11-20 08:38:12.262234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.585 [2024-11-20 08:38:12.262242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.274234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.274245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.275264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:07.846 [2024-11-20 08:38:12.286235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.286244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.298237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.298251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.310236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.310248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.322234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.322243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.334234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.334242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.346242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.346258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.358238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.358249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.370237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.370246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.382235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.382244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.394234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.394241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.406234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.406242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.418235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.418243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.430235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.430244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.442234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.846 [2024-11-20 08:38:12.442241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.846 [2024-11-20 08:38:12.454234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.454240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.466234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.466242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.478235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.478243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.490233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.490240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.502233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.502241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.514240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.514249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.526234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.526240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.538234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.538240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.550234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.550240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.847 [2024-11-20 08:38:12.562236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.847 [2024-11-20 08:38:12.562247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:08.108 [2024-11-20 08:38:12.606350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:08.108 [2024-11-20 08:38:12.606362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:08.108 [2024-11-20 08:38:12.618242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:08.108 [2024-11-20 08:38:12.618254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:08.108 Running I/O for 5 seconds...
00:41:08.108 [2024-11-20 08:38:12.634241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.108 [2024-11-20 08:38:12.634257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.108 [2024-11-20 08:38:12.647131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.108 [2024-11-20 08:38:12.647146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.108 [2024-11-20 08:38:12.661170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.108 [2024-11-20 08:38:12.661186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.108 [2024-11-20 08:38:12.674507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.108 [2024-11-20 08:38:12.674522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.108 [2024-11-20 08:38:12.689281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.689295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.702311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.702327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.715170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.715184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.729295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.729310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.742627] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.742641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.757804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.757818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.770742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.770755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.785784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.785799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.798870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.798884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.813192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.813207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.109 [2024-11-20 08:38:12.826215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.109 [2024-11-20 08:38:12.826230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.839224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.839239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.853657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.853672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.866305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.866321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.879263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.879278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.893455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.893469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.906479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.906494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.919376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.919390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.933416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.933431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.370 [2024-11-20 08:38:12.946120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.370 [2024-11-20 08:38:12.946134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:12.958668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 
[2024-11-20 08:38:12.958682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:12.973536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:12.973555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:12.986597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:12.986611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.001478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.001493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.014370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.014385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.027213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.027227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.041967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.041981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.054786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.054800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.069407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.069422] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.371 [2024-11-20 08:38:13.082646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.371 [2024-11-20 08:38:13.082660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.096856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.096875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.110144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.110159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.123093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.123107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.137588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.137602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.150259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.150273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.163066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.163080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.177478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.177492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:08.631 [2024-11-20 08:38:13.190722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.190737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.205381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.205396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.218377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.218392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.231525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.231544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.245393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.245409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.258582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.258597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.273502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.273516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.286609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.286623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.301072] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.301087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.314245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.631 [2024-11-20 08:38:13.314259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.631 [2024-11-20 08:38:13.327379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.632 [2024-11-20 08:38:13.327393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.632 [2024-11-20 08:38:13.341618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.632 [2024-11-20 08:38:13.341633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.632 [2024-11-20 08:38:13.354845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.632 [2024-11-20 08:38:13.354859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.369606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.369621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.382777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.382791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.397358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.397373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.410182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.410197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.423142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.423156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.437469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.437483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.450516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.450530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.465095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.465110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.477901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.477916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.490751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.490769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.505109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.505123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.518033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 
[2024-11-20 08:38:13.518047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.531379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.531394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.545520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.545535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.558722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.558736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.573442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.573457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.586414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.586430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.599388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.599402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.897 [2024-11-20 08:38:13.613752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.897 [2024-11-20 08:38:13.613767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 18998.00 IOPS, 148.42 MiB/s [2024-11-20T07:38:14.000Z] [2024-11-20 08:38:13.626646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 
[2024-11-20 08:38:13.626661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.641541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.641556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.654848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.654866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.668964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.668979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.681929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.681944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.694409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.694424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.707197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.707212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.721508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.721524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.734391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.734406] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.747370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.747388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.761485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.761499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.774667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.774681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.789288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.789303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.802631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.802645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.817525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.817540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.830502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.830516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.845587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.845602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:09.271 [2024-11-20 08:38:13.858468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.858482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.871820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.871834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.886239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.886254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.898988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.899004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.913934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.913950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.927395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.927410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.941299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.941313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.954468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.954483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.967225] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.967240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.981490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.981505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.271 [2024-11-20 08:38:13.994317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.271 [2024-11-20 08:38:13.994333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.007669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.007684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.021195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.021210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.033814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.033830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.047146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.047160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.061202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.061217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564 [2024-11-20 08:38:14.074467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:09.564 [2024-11-20 08:38:14.074481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.564
[... preceding message pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeated continuously at ~13-15 ms intervals from [2024-11-20 08:38:14.087361] through [2024-11-20 08:38:16.285498]; identical repeats omitted, interleaved progress lines retained below ...]
18984.50 IOPS, 148.32 MiB/s [2024-11-20T07:38:14.828Z]
19019.00 IOPS, 148.59 MiB/s [2024-11-20T07:38:15.874Z]
[2024-11-20 08:38:16.285498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:41:11.668 [2024-11-20 08:38:16.298455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.298475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.311250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.311265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.325526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.325540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.338741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.338755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.353696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.353711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.366660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.366673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.668 [2024-11-20 08:38:16.381404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.668 [2024-11-20 08:38:16.381418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.394328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.394343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.407026] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.407041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.421165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.421180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.434108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.434123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.447069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.447083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.461008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.461022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.473877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.473892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.486614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.486628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.501205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.501220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.514330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.514344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.527012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.527026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.541761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.541775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.554724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.554745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.569393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.569407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.582429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.582444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.595260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.595274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.609228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 [2024-11-20 08:38:16.609243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 [2024-11-20 08:38:16.622207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.928 
[2024-11-20 08:38:16.622221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.928 19044.75 IOPS, 148.79 MiB/s [2024-11-20T07:38:16.657Z] [... last 2 error lines repeated for each subsequent add-namespace attempt, roughly every 13 ms, from 08:38:16.634 through 08:38:17.609 ...] 00:41:12.969 [2024-11-20 08:38:17.622449]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.622464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 19063.00 IOPS, 148.93 MiB/s [2024-11-20T07:38:17.698Z] [2024-11-20 08:38:17.634921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.634936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 00:41:12.969 Latency(us) 00:41:12.969 [2024-11-20T07:38:17.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:12.969 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:12.969 Nvme1n1 : 5.01 19063.09 148.93 0.00 0.00 6707.58 2676.05 11851.09 00:41:12.969 [2024-11-20T07:38:17.698Z] =================================================================================================================== 00:41:12.969 [2024-11-20T07:38:17.698Z] Total : 19063.09 148.93 0.00 0.00 6707.58 2676.05 11851.09 00:41:12.969 [2024-11-20 08:38:17.646239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.646253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 [2024-11-20 08:38:17.658244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.658259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 [2024-11-20 08:38:17.670238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.670251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 [2024-11-20 08:38:17.682240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.969 [2024-11-20 08:38:17.682253] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.969 [2024-11-20 08:38:17.694238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.970 [2024-11-20 08:38:17.694250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 [2024-11-20 08:38:17.706236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.229 [2024-11-20 08:38:17.706247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 [2024-11-20 08:38:17.718234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.229 [2024-11-20 08:38:17.718242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 [2024-11-20 08:38:17.730236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.229 [2024-11-20 08:38:17.730247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 [2024-11-20 08:38:17.742234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.229 [2024-11-20 08:38:17.742243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 [2024-11-20 08:38:17.754234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.229 [2024-11-20 08:38:17.754242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (2313356) - No such process 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 2313356 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:13.229 08:38:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.229 delay0 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.229 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:13.230 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.230 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.230 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.230 08:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:13.230 [2024-11-20 08:38:17.900924] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:21.365 Initializing NVMe Controllers 00:41:21.365 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:21.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:21.365 Initialization complete. Launching workers. 00:41:21.365 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 10760 00:41:21.365 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10977, failed to submit 74 00:41:21.365 success 10862, unsuccessful 115, failed 0 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:21.365 rmmod nvme_tcp 00:41:21.365 rmmod nvme_fabrics 00:41:21.365 rmmod nvme_keyring 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 2311137 ']' 
00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2311137 ']' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311137' 00:41:21.365 killing process with pid 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2311137 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:21.365 08:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:21.365 08:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:41:22.304 00:41:22.304 real 0m35.028s 00:41:22.304 user 0m44.020s 00:41:22.304 sys 0m12.774s 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:22.304 ************************************ 00:41:22.304 END TEST nvmf_zcopy 00:41:22.304 
************************************ 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # trap - SIGINT SIGTERM EXIT 00:41:22.304 00:41:22.304 real 5m12.464s 00:41:22.304 user 10m25.643s 00:41:22.304 sys 2m13.006s 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.304 08:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:22.304 ************************************ 00:41:22.304 END TEST nvmf_target_core_interrupt_mode 00:41:22.304 ************************************ 00:41:22.304 08:38:26 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:22.304 08:38:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:22.304 08:38:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.304 08:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:22.304 ************************************ 00:41:22.304 START TEST nvmf_interrupt 00:41:22.304 ************************************ 00:41:22.304 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:22.566 * Looking for test storage... 
00:41:22.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.566 --rc genhtml_branch_coverage=1 00:41:22.566 --rc genhtml_function_coverage=1 00:41:22.566 --rc genhtml_legend=1 00:41:22.566 --rc geninfo_all_blocks=1 00:41:22.566 --rc geninfo_unexecuted_blocks=1 00:41:22.566 00:41:22.566 ' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.566 --rc genhtml_branch_coverage=1 00:41:22.566 --rc 
genhtml_function_coverage=1 00:41:22.566 --rc genhtml_legend=1 00:41:22.566 --rc geninfo_all_blocks=1 00:41:22.566 --rc geninfo_unexecuted_blocks=1 00:41:22.566 00:41:22.566 ' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.566 --rc genhtml_branch_coverage=1 00:41:22.566 --rc genhtml_function_coverage=1 00:41:22.566 --rc genhtml_legend=1 00:41:22.566 --rc geninfo_all_blocks=1 00:41:22.566 --rc geninfo_unexecuted_blocks=1 00:41:22.566 00:41:22.566 ' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.566 --rc genhtml_branch_coverage=1 00:41:22.566 --rc genhtml_function_coverage=1 00:41:22.566 --rc genhtml_legend=1 00:41:22.566 --rc geninfo_all_blocks=1 00:41:22.566 --rc geninfo_unexecuted_blocks=1 00:41:22.566 00:41:22.566 ' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:22.566 
08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:41:22.566 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:22.567 
08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:41:22.567 08:38:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:41:30.709 
08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:30.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:30.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:30.709 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:30.710 Found net devices under 0000:31:00.0: cvl_0_0 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:30.710 Found net devices under 0000:31:00.1: cvl_0_1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@247 -- # create_target_ns 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:41:30.710 10.0.0.1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:41:30.710 10.0.0.2 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:41:30.710 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:30.971 08:38:35 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:30.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:30.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:41:30.971 00:41:30.971 --- 10.0.0.1 ping statistics --- 00:41:30.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.971 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:30.971 08:38:35 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:30.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:30.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:41:30.971 00:41:30.971 --- 10.0.0.2 ping statistics --- 00:41:30.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.971 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:30.971 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:30.972 08:38:35 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:30.972 
08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=2320484 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 2320484 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2320484 ']' 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:30.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.972 08:38:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:31.232 [2024-11-20 08:38:35.716909] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:31.233 [2024-11-20 08:38:35.717904] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:41:31.233 [2024-11-20 08:38:35.717944] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:31.233 [2024-11-20 08:38:35.801908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:31.233 [2024-11-20 08:38:35.837883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:31.233 [2024-11-20 08:38:35.837917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:31.233 [2024-11-20 08:38:35.837926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:31.233 [2024-11-20 08:38:35.837932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:31.233 [2024-11-20 08:38:35.837938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:31.233 [2024-11-20 08:38:35.839184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:31.233 [2024-11-20 08:38:35.839276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.233 [2024-11-20 08:38:35.893730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:31.233 [2024-11-20 08:38:35.894280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:31.233 [2024-11-20 08:38:35.894615] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
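The setup and reactor checks traced in this run lean on two tiny helpers: `val_to_ip`, which unpacks the integer `ip_pool` counter (e.g. `167772161`) into the dotted-quad addresses assigned to `cvl_0_0`/`cvl_0_1`, and the CPU-rate thresholding inside `reactor_is_busy_or_idle`. A minimal standalone sketch of both, reconstructed from the logged commands — the function bodies and the `unknown` fallback are assumptions; only the names, inputs, and thresholds appear in the trace:

```shell
#!/usr/bin/env bash
# val_to_ip: unpack a 32-bit integer into a dotted-quad address, matching
# the trace's "printf '%u.%u.%u.%u\n' 10 0 0 1" output for 167772161.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(((val >> 24) & 0xff)) \
    $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) \
    $((val & 0xff))
}

# reactor_state: classify a reactor's CPU rate the way the trace's checks do:
# truncate the "%CPU" column reported by top ("99.9" -> 99) and compare it
# against the busy/idle thresholds used in the log (65 and 30 by default).
reactor_state() {
  local cpu_rate=${1%%.*}                      # drop the fractional part
  local busy_threshold=${2:-65} idle_threshold=${3:-30}
  if ((cpu_rate > busy_threshold)); then
    echo busy
  elif ((cpu_rate <= idle_threshold)); then
    echo idle
  else
    echo unknown
  fi
}

val_to_ip 167772161     # -> 10.0.0.1 (initiator address set on cvl_0_0)
val_to_ip 167772162     # -> 10.0.0.2 (target address inside nvmf_ns_spdk)
reactor_state 6.2       # -> idle (prologue check on reactor_0)
reactor_state 99.9 30   # -> busy (reactor_1 under spdk_nvme_perf load)
```

In the actual harness the rate comes from `top -bHn 1 -p <pid>` piped through `grep reactor_<idx>` and `awk '{print $9}'`, as the subsequent trace lines show.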
00:41:31.803 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:31.803 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:31.803 08:38:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:31.803 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:31.803 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:32.063 5000+0 records in 00:41:32.063 5000+0 records out 00:41:32.063 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179652 s, 570 MB/s 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:32.063 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 AIO0 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.064 08:38:36 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 [2024-11-20 08:38:36.591798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 [2024-11-20 08:38:36.620433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2320484 0 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 0 idle 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:32.064 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320484 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.25 reactor_0' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320484 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.25 reactor_0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:32.324 
08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2320484 1 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 1 idle 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320530 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2320634 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2320484 0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2320484 0 busy 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:32.324 08:38:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320484 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.36 reactor_0' 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320484 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.36 reactor_0 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:32.585 08:38:37 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2320484 1 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2320484 1 busy 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:32.585 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320530 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1' 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320530 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:32.844 08:38:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2320634 00:41:42.834 Initializing NVMe Controllers 00:41:42.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:42.834 Controller IO queue size 256, less than required. 00:41:42.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:42.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:42.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:42.834 Initialization complete. Launching workers. 
00:41:42.834 ======================================================== 00:41:42.834 Latency(us) 00:41:42.834 Device Information : IOPS MiB/s Average min max 00:41:42.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16661.70 65.08 15375.02 2633.02 56677.19 00:41:42.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18634.90 72.79 13739.42 7451.85 29827.88 00:41:42.834 ======================================================== 00:41:42.834 Total : 35296.60 137.88 14511.51 2633.02 56677.19 00:41:42.834 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2320484 0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 0 idle 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320484 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0' 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320484 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2320484 1 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 1 idle 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:42.834 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:42.835 08:38:47 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:42.835 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:43.094 08:38:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:43.354 08:38:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:41:43.354 08:38:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:43.355 08:38:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:43.355 08:38:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:43.355 08:38:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:45.897 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2320484 0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 0 idle 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320484 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.47 reactor_0' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320484 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.47 reactor_0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2320484 1 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2320484 1 idle 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2320484 00:41:45.898 
08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2320484 -w 256 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2320530 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2320530 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:45.898 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:46.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:46.159 rmmod nvme_tcp 00:41:46.159 rmmod nvme_fabrics 00:41:46.159 rmmod nvme_keyring 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:46.159 08:38:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 2320484 ']' 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 2320484 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2320484 ']' 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2320484 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:46.159 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320484 00:41:46.421 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:46.421 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:46.421 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320484' 00:41:46.421 killing process with pid 2320484 00:41:46.421 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2320484 00:41:46.421 08:38:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2320484 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> 
/dev/null' 00:41:46.421 08:38:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:41:48.970 00:41:48.970 real 0m26.111s 00:41:48.970 user 0m40.619s 00:41:48.970 sys 0m10.077s 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:48.970 08:38:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:48.970 ************************************ 00:41:48.970 END TEST nvmf_interrupt 00:41:48.970 ************************************ 00:41:48.970 00:41:48.970 real 31m39.746s 00:41:48.970 user 62m10.633s 00:41:48.970 sys 11m6.006s 00:41:48.970 08:38:53 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:48.970 08:38:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:48.970 ************************************ 00:41:48.970 END TEST nvmf_tcp 00:41:48.970 ************************************ 00:41:48.970 08:38:53 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:48.970 08:38:53 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:48.970 08:38:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:48.970 08:38:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:48.970 08:38:53 -- common/autotest_common.sh@10 -- # set +x 00:41:48.970 ************************************ 00:41:48.970 START TEST spdkcli_nvmf_tcp 00:41:48.970 ************************************ 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:48.970 * Looking for test storage... 00:41:48.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:48.970 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:48.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.971 --rc genhtml_branch_coverage=1 00:41:48.971 --rc genhtml_function_coverage=1 00:41:48.971 --rc genhtml_legend=1 00:41:48.971 --rc geninfo_all_blocks=1 00:41:48.971 --rc geninfo_unexecuted_blocks=1 00:41:48.971 00:41:48.971 ' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:48.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.971 --rc genhtml_branch_coverage=1 00:41:48.971 --rc genhtml_function_coverage=1 00:41:48.971 --rc genhtml_legend=1 00:41:48.971 --rc geninfo_all_blocks=1 00:41:48.971 --rc 
geninfo_unexecuted_blocks=1 00:41:48.971 00:41:48.971 ' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:48.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.971 --rc genhtml_branch_coverage=1 00:41:48.971 --rc genhtml_function_coverage=1 00:41:48.971 --rc genhtml_legend=1 00:41:48.971 --rc geninfo_all_blocks=1 00:41:48.971 --rc geninfo_unexecuted_blocks=1 00:41:48.971 00:41:48.971 ' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:48.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.971 --rc genhtml_branch_coverage=1 00:41:48.971 --rc genhtml_function_coverage=1 00:41:48.971 --rc genhtml_legend=1 00:41:48.971 --rc geninfo_all_blocks=1 00:41:48.971 --rc geninfo_unexecuted_blocks=1 00:41:48.971 00:41:48.971 ' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:48.971 
08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:48.971 
08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:41:48.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2323902 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2323902 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2323902 ']' 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:48.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:48.971 08:38:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:48.971 [2024-11-20 08:38:53.542312] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:41:48.971 [2024-11-20 08:38:53.542387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323902 ] 00:41:48.971 [2024-11-20 08:38:53.628567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:48.971 [2024-11-20 08:38:53.671953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:48.971 [2024-11-20 08:38:53.671970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:49.913 08:38:54 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:49.913 08:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:49.913 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:49.913 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:49.913 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:49.913 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:49.913 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:49.913 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:49.913 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:49.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:49.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:49.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:49.914 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:49.914 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:49.914 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:49.914 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:49.914 ' 00:41:52.460 [2024-11-20 08:38:56.800830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:53.403 [2024-11-20 08:38:58.008716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:55.950 [2024-11-20 08:39:00.227655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:41:57.861 [2024-11-20 08:39:02.438314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:59.773 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:59.773 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:59.773 
08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:59.773 08:39:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.034 08:39:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:00.034 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:00.034 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:00.034 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:00.034 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:00.034 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:00.034 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:00.034 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:00.034 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:00.034 ' 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:06.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:06.616 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:06.616 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:06.617 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2323902 ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2323902' 00:42:06.617 killing process with pid 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 
-- # cleanup 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2323902 ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2323902 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2323902 ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2323902 00:42:06.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2323902) - No such process 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2323902 is not found' 00:42:06.617 Process with pid 2323902 is not found 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:06.617 00:42:06.617 real 0m17.406s 00:42:06.617 user 0m37.689s 00:42:06.617 sys 0m0.783s 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.617 08:39:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:06.617 ************************************ 00:42:06.617 END TEST spdkcli_nvmf_tcp 00:42:06.617 ************************************ 00:42:06.617 08:39:10 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:06.617 08:39:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:06.617 08:39:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.617 08:39:10 -- 
common/autotest_common.sh@10 -- # set +x 00:42:06.617 ************************************ 00:42:06.617 START TEST nvmf_identify_passthru 00:42:06.617 ************************************ 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:06.617 * Looking for test storage... 00:42:06.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:06.617 
08:39:10 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:06.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.617 --rc genhtml_branch_coverage=1 00:42:06.617 --rc genhtml_function_coverage=1 00:42:06.617 --rc genhtml_legend=1 00:42:06.617 --rc geninfo_all_blocks=1 00:42:06.617 --rc geninfo_unexecuted_blocks=1 00:42:06.617 00:42:06.617 ' 
00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:06.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.617 --rc genhtml_branch_coverage=1 00:42:06.617 --rc genhtml_function_coverage=1 00:42:06.617 --rc genhtml_legend=1 00:42:06.617 --rc geninfo_all_blocks=1 00:42:06.617 --rc geninfo_unexecuted_blocks=1 00:42:06.617 00:42:06.617 ' 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:06.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.617 --rc genhtml_branch_coverage=1 00:42:06.617 --rc genhtml_function_coverage=1 00:42:06.617 --rc genhtml_legend=1 00:42:06.617 --rc geninfo_all_blocks=1 00:42:06.617 --rc geninfo_unexecuted_blocks=1 00:42:06.617 00:42:06.617 ' 00:42:06.617 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:06.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.617 --rc genhtml_branch_coverage=1 00:42:06.617 --rc genhtml_function_coverage=1 00:42:06.617 --rc genhtml_legend=1 00:42:06.617 --rc geninfo_all_blocks=1 00:42:06.617 --rc geninfo_unexecuted_blocks=1 00:42:06.617 00:42:06.617 ' 00:42:06.617 08:39:10 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.617 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.617 08:39:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.618 08:39:10 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:42:06.618 08:39:10 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:42:06.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:06.618 08:39:10 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.618 08:39:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.618 08:39:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.618 08:39:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.618 08:39:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:42:06.618 08:39:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.618 08:39:10 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:42:06.618 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null'
00:42:06.618 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:42:06.618 08:39:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable
00:42:06.618 08:39:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:14.780 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:42:14.780 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=()
00:42:14.780 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@131 -- # local -a pci_devs
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:42:14.781 Found 0000:31:00.0 (0x8086 - 0x159b)
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:42:14.781 Found 0000:31:00.1 (0x8086 - 0x159b)
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:42:14.781 Found net devices under 0000:31:00.0: cvl_0_0
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:42:14.781 Found net devices under 0000:31:00.1: cvl_0_1
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@247 -- # create_target_ns
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=()
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:42:14.781 08:39:18 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:42:14.781 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:42:14.781 10.0.0.1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:42:14.782 10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:42:14.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:42:14.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms
00:42:14.782
00:42:14.782 --- 10.0.0.1 ping statistics ---
00:42:14.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:14.782 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:42:14.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:42:14.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms
00:42:14.782
00:42:14.782 --- 10.0.0.2 ping statistics ---
00:42:14.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:14.782 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:42:14.782 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@160 -- # return 0
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@160 -- # return 0
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:42:14.783 08:39:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:42:14.783 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:14.783 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:42:14.783 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:42:15.044 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:42:15.044 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:42:15.044 08:39:19 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0
00:42:15.044 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:42:15.044 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
00:42:15.044 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:42:15.044 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:42:15.044 08:39:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:42:15.305 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494
00:42:15.305 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:42:15.305 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:42:15.305 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2332140
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:42:15.876 08:39:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2332140
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2332140 ']'
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:15.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:15.876 08:39:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:16.137 [2024-11-20 08:39:20.610593] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization...
00:42:16.137 [2024-11-20 08:39:20.610651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:42:16.137 [2024-11-20 08:39:20.696100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:42:16.137 [2024-11-20 08:39:20.735010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:42:16.137 [2024-11-20 08:39:20.735049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:42:16.137 [2024-11-20 08:39:20.735057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:42:16.137 [2024-11-20 08:39:20.735064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:42:16.137 [2024-11-20 08:39:20.735070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:42:16.137 [2024-11-20 08:39:20.736888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:16.137 [2024-11-20 08:39:20.736983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:42:16.137 [2024-11-20 08:39:20.737046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:16.137 [2024-11-20 08:39:20.737047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:42:16.708 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:16.708 INFO: Log level set to 20
00:42:16.708 INFO: Requests:
00:42:16.708 {
00:42:16.708 "jsonrpc": "2.0",
00:42:16.708 "method": "nvmf_set_config",
00:42:16.708 "id": 1,
00:42:16.708 "params": {
00:42:16.708 "admin_cmd_passthru": {
00:42:16.708 "identify_ctrlr": true
00:42:16.708 }
00:42:16.708 }
00:42:16.708 }
00:42:16.708
00:42:16.708 INFO: response:
00:42:16.708 {
00:42:16.708 "jsonrpc": "2.0",
00:42:16.708 "id": 1,
00:42:16.708 "result": true
00:42:16.708 }
00:42:16.708
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:16.708 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:16.708 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:42:16.708 INFO: Setting log level to
20 00:42:16.708 INFO: Setting log level to 20 00:42:16.708 INFO: Log level set to 20 00:42:16.708 INFO: Log level set to 20 00:42:16.708 INFO: Requests: 00:42:16.708 { 00:42:16.708 "jsonrpc": "2.0", 00:42:16.708 "method": "framework_start_init", 00:42:16.708 "id": 1 00:42:16.708 } 00:42:16.708 00:42:16.708 INFO: Requests: 00:42:16.708 { 00:42:16.708 "jsonrpc": "2.0", 00:42:16.708 "method": "framework_start_init", 00:42:16.708 "id": 1 00:42:16.708 } 00:42:16.708 00:42:16.969 [2024-11-20 08:39:21.485453] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:16.969 INFO: response: 00:42:16.969 { 00:42:16.969 "jsonrpc": "2.0", 00:42:16.969 "id": 1, 00:42:16.969 "result": true 00:42:16.969 } 00:42:16.969 00:42:16.969 INFO: response: 00:42:16.969 { 00:42:16.969 "jsonrpc": "2.0", 00:42:16.969 "id": 1, 00:42:16.969 "result": true 00:42:16.969 } 00:42:16.970 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.970 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.970 INFO: Setting log level to 40 00:42:16.970 INFO: Setting log level to 40 00:42:16.970 INFO: Setting log level to 40 00:42:16.970 [2024-11-20 08:39:21.498785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.970 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.970 08:39:21 nvmf_identify_passthru -- 
target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.970 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.231 Nvme0n1 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.231 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.231 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.231 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.231 [2024-11-20 08:39:21.900351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.231 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 
00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.231 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.231 [ 00:42:17.231 { 00:42:17.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:17.232 "subtype": "Discovery", 00:42:17.232 "listen_addresses": [], 00:42:17.232 "allow_any_host": true, 00:42:17.232 "hosts": [] 00:42:17.232 }, 00:42:17.232 { 00:42:17.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:17.232 "subtype": "NVMe", 00:42:17.232 "listen_addresses": [ 00:42:17.232 { 00:42:17.232 "trtype": "TCP", 00:42:17.232 "adrfam": "IPv4", 00:42:17.232 "traddr": "10.0.0.2", 00:42:17.232 "trsvcid": "4420" 00:42:17.232 } 00:42:17.232 ], 00:42:17.232 "allow_any_host": true, 00:42:17.232 "hosts": [], 00:42:17.232 "serial_number": "SPDK00000000000001", 00:42:17.232 "model_number": "SPDK bdev Controller", 00:42:17.232 "max_namespaces": 1, 00:42:17.232 "min_cntlid": 1, 00:42:17.232 "max_cntlid": 65519, 00:42:17.232 "namespaces": [ 00:42:17.232 { 00:42:17.232 "nsid": 1, 00:42:17.232 "bdev_name": "Nvme0n1", 00:42:17.232 "name": "Nvme0n1", 00:42:17.232 "nguid": "3634473052605494002538450000002D", 00:42:17.232 "uuid": "36344730-5260-5494-0025-38450000002d" 00:42:17.232 } 00:42:17.232 ] 00:42:17.232 } 00:42:17.232 ] 00:42:17.232 08:39:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.232 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:17.232 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:17.232 08:39:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:17.493 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 
00:42:17.493 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:17.493 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:17.493 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:17.756 08:39:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:17.756 rmmod nvme_tcp 00:42:17.756 
rmmod nvme_fabrics 00:42:17.756 rmmod nvme_keyring 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 2332140 ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 2332140 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2332140 ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2332140 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:17.756 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332140 00:42:18.018 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:18.018 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:18.018 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332140' 00:42:18.018 killing process with pid 2332140 00:42:18.018 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2332140 00:42:18.018 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2332140 00:42:18.347 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:18.347 08:39:22 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:42:18.347 08:39:22 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:42:18.347 08:39:22 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:18.347 08:39:22 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:42:18.347 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:42:18.347 08:39:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:42:20.294 08:39:24 
nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:20.294 08:39:24 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:42:20.294 00:42:20.294 real 0m14.108s 00:42:20.294 user 0m10.660s 00:42:20.294 sys 0m7.274s 00:42:20.294 08:39:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:20.294 08:39:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.294 ************************************ 00:42:20.294 END TEST nvmf_identify_passthru 00:42:20.294 ************************************ 00:42:20.294 08:39:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:20.294 08:39:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:20.294 08:39:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:20.294 08:39:24 -- common/autotest_common.sh@10 -- # set +x 00:42:20.294 ************************************ 00:42:20.294 START TEST nvmf_dif 00:42:20.294 ************************************ 00:42:20.294 08:39:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:20.294 * Looking for test storage... 
00:42:20.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:20.294 08:39:24 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:20.294 08:39:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:42:20.294 08:39:24 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:20.555 08:39:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.555 --rc genhtml_branch_coverage=1 00:42:20.555 --rc genhtml_function_coverage=1 00:42:20.555 --rc genhtml_legend=1 00:42:20.555 --rc geninfo_all_blocks=1 00:42:20.555 --rc geninfo_unexecuted_blocks=1 00:42:20.555 00:42:20.555 ' 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.555 --rc genhtml_branch_coverage=1 00:42:20.555 --rc genhtml_function_coverage=1 00:42:20.555 --rc genhtml_legend=1 00:42:20.555 --rc geninfo_all_blocks=1 00:42:20.555 --rc geninfo_unexecuted_blocks=1 00:42:20.555 00:42:20.555 ' 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:42:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.555 --rc genhtml_branch_coverage=1 00:42:20.555 --rc genhtml_function_coverage=1 00:42:20.555 --rc genhtml_legend=1 00:42:20.555 --rc geninfo_all_blocks=1 00:42:20.555 --rc geninfo_unexecuted_blocks=1 00:42:20.555 00:42:20.555 ' 00:42:20.555 08:39:25 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:20.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.555 --rc genhtml_branch_coverage=1 00:42:20.555 --rc genhtml_function_coverage=1 00:42:20.555 --rc genhtml_legend=1 00:42:20.555 --rc geninfo_all_blocks=1 00:42:20.555 --rc geninfo_unexecuted_blocks=1 00:42:20.555 00:42:20.555 ' 00:42:20.555 08:39:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:20.555 08:39:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:20.556 08:39:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:20.556 08:39:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:20.556 08:39:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:20.556 08:39:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:20.556 08:39:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.556 08:39:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.556 08:39:25 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.556 08:39:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:20.556 08:39:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:42:20.556 08:39:25 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:20.556 08:39:25 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:20.556 08:39:25 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:42:20.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:20.556 08:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:20.556 08:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:20.556 08:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:20.556 08:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:20.556 08:39:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:42:20.556 08:39:25 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:20.556 08:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:42:20.556 08:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:42:20.556 08:39:25 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:42:20.556 08:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:42:28.696 
08:39:32 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:28.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:28.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:28.696 Found net devices under 0000:31:00.0: cvl_0_0 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:28.696 Found net devices under 0000:31:00.1: cvl_0_1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:42:28.696 08:39:32 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@247 -- # create_target_ns 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:42:28.696 
08:39:32 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 
00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:42:28.696 10.0.0.1 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:28.696 08:39:32 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:42:28.697 10.0.0.2 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:42:28.697 08:39:32 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:28.697 08:39:32 nvmf_dif -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:28.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:28.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:42:28.697 00:42:28.697 --- 10.0.0.1 ping statistics --- 00:42:28.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.697 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:28.697 08:39:32 nvmf_dif -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:28.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:28.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:42:28.697 00:42:28.697 --- 10.0.0.2 ping statistics --- 00:42:28.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.697 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:28.697 08:39:32 nvmf_dif -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:28.697 08:39:32 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:42:28.697 08:39:32 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:42:28.697 08:39:32 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:31.998 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:42:31.998 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:42:32.259 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 
0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:42:32.259 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:32.520 08:39:37 
nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@166 -- 
# echo 10.0.0.2 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:42:32.520 08:39:37 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:32.520 08:39:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:32.520 08:39:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:32.520 
08:39:37 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:32.520 08:39:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:32.520 08:39:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=2338939 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 2338939 00:42:32.520 08:39:37 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:32.520 08:39:37 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2338939 ']' 00:42:32.521 08:39:37 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:32.521 08:39:37 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:32.521 08:39:37 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:32.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:32.521 08:39:37 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:32.521 08:39:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.781 [2024-11-20 08:39:37.300059] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:42:32.781 [2024-11-20 08:39:37.300122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:32.781 [2024-11-20 08:39:37.391149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:32.781 [2024-11-20 08:39:37.432477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:32.781 [2024-11-20 08:39:37.432512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:32.781 [2024-11-20 08:39:37.432524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:32.781 [2024-11-20 08:39:37.432531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:32.781 [2024-11-20 08:39:37.432538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:32.781 [2024-11-20 08:39:37.433137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:33.724 08:39:38 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 08:39:38 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:33.724 08:39:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:33.724 08:39:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 [2024-11-20 08:39:38.138785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.724 08:39:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:42:33.724 08:39:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 ************************************ 00:42:33.724 START TEST fio_dif_1_default 00:42:33.724 ************************************ 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 bdev_null0 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.724 08:39:38 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.724 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.725 [2024-11-20 08:39:38.223144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:33.725 { 00:42:33.725 "params": { 00:42:33.725 "name": "Nvme$subsystem", 00:42:33.725 "trtype": "$TEST_TRANSPORT", 00:42:33.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.725 "adrfam": 
"ipv4", 00:42:33.725 "trsvcid": "$NVMF_PORT", 00:42:33.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.725 "hdgst": ${hdgst:-false}, 00:42:33.725 "ddgst": ${ddgst:-false} 00:42:33.725 }, 00:42:33.725 "method": "bdev_nvme_attach_controller" 00:42:33.725 } 00:42:33.725 EOF 00:42:33.725 )") 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.725 
08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:33.725 "params": { 00:42:33.725 "name": "Nvme0", 00:42:33.725 "trtype": "tcp", 00:42:33.725 "traddr": "10.0.0.2", 00:42:33.725 "adrfam": "ipv4", 00:42:33.725 "trsvcid": "4420", 00:42:33.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.725 "hdgst": false, 00:42:33.725 "ddgst": false 00:42:33.725 }, 00:42:33.725 "method": "bdev_nvme_attach_controller" 00:42:33.725 }' 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:33.725 08:39:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:42:33.985 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:33.986 fio-3.35 00:42:33.986 Starting 1 thread 00:42:46.219 00:42:46.219 filename0: (groupid=0, jobs=1): err= 0: pid=2339510: Wed Nov 20 08:39:49 2024 00:42:46.219 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:42:46.219 slat (nsec): min=5409, max=32558, avg=6216.31, stdev=1609.92 00:42:46.219 clat (usec): min=40918, max=42945, avg=40995.16, stdev=140.97 00:42:46.219 lat (usec): min=40924, max=42977, avg=41001.38, stdev=141.62 00:42:46.219 clat percentiles (usec): 00:42:46.219 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:46.219 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:46.219 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:46.219 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:46.219 | 99.99th=[42730] 00:42:46.219 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:42:46.219 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:46.219 lat (msec) : 50=100.00% 00:42:46.219 cpu : usr=93.49%, sys=6.31%, ctx=13, majf=0, minf=234 00:42:46.219 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.219 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.219 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:46.219 00:42:46.219 Run status group 0 (all jobs): 00:42:46.219 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10007-10007msec 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@43 -- # local sub 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 00:42:46.219 real 0m11.229s 00:42:46.219 user 0m22.164s 00:42:46.219 sys 0m0.927s 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 ************************************ 00:42:46.219 END TEST fio_dif_1_default 00:42:46.219 ************************************ 00:42:46.219 08:39:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:46.219 08:39:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:46.219 08:39:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 ************************************ 00:42:46.219 
START TEST fio_dif_1_multi_subsystems 00:42:46.219 ************************************ 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 bdev_null0 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:46.219 08:39:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 [2024-11-20 08:39:49.520310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.219 bdev_null1 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.219 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:46.220 
08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:46.220 { 00:42:46.220 "params": { 00:42:46.220 "name": "Nvme$subsystem", 00:42:46.220 "trtype": "$TEST_TRANSPORT", 00:42:46.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:46.220 "adrfam": "ipv4", 00:42:46.220 "trsvcid": "$NVMF_PORT", 00:42:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:46.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:46.220 "hdgst": ${hdgst:-false}, 00:42:46.220 "ddgst": ${ddgst:-false} 00:42:46.220 
}, 00:42:46.220 "method": "bdev_nvme_attach_controller" 00:42:46.220 } 00:42:46.220 EOF 00:42:46.220 )") 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:46.220 { 00:42:46.220 "params": { 00:42:46.220 "name": "Nvme$subsystem", 00:42:46.220 "trtype": "$TEST_TRANSPORT", 00:42:46.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:46.220 "adrfam": "ipv4", 00:42:46.220 "trsvcid": "$NVMF_PORT", 00:42:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:46.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:46.220 "hdgst": ${hdgst:-false}, 00:42:46.220 "ddgst": ${ddgst:-false} 00:42:46.220 }, 00:42:46.220 "method": "bdev_nvme_attach_controller" 00:42:46.220 } 00:42:46.220 EOF 00:42:46.220 )") 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems 
-- nvmf/common.sh@396 -- # jq . 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:46.220 "params": { 00:42:46.220 "name": "Nvme0", 00:42:46.220 "trtype": "tcp", 00:42:46.220 "traddr": "10.0.0.2", 00:42:46.220 "adrfam": "ipv4", 00:42:46.220 "trsvcid": "4420", 00:42:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.220 "hdgst": false, 00:42:46.220 "ddgst": false 00:42:46.220 }, 00:42:46.220 "method": "bdev_nvme_attach_controller" 00:42:46.220 },{ 00:42:46.220 "params": { 00:42:46.220 "name": "Nvme1", 00:42:46.220 "trtype": "tcp", 00:42:46.220 "traddr": "10.0.0.2", 00:42:46.220 "adrfam": "ipv4", 00:42:46.220 "trsvcid": "4420", 00:42:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:46.220 "hdgst": false, 00:42:46.220 "ddgst": false 00:42:46.220 }, 00:42:46.220 "method": "bdev_nvme_attach_controller" 00:42:46.220 }' 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:46.220 08:39:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:46.220 08:39:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.220 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:46.220 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:46.220 fio-3.35 00:42:46.220 Starting 2 threads 00:42:56.213 00:42:56.213 filename0: (groupid=0, jobs=1): err= 0: pid=2341858: Wed Nov 20 08:40:00 2024 00:42:56.213 read: IOPS=187, BW=752KiB/s (770kB/s)(7536KiB/10025msec) 00:42:56.213 slat (nsec): min=5415, max=37006, avg=6864.82, stdev=3347.07 00:42:56.213 clat (usec): min=489, max=43033, avg=21263.63, stdev=20163.44 00:42:56.213 lat (usec): min=495, max=43039, avg=21270.49, stdev=20163.07 00:42:56.213 clat percentiles (usec): 00:42:56.213 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 889], 00:42:56.213 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:42:56.213 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:42:56.213 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:42:56.213 | 99.99th=[43254] 00:42:56.213 bw ( KiB/s): min= 640, max= 769, per=56.91%, avg=752.05, stdev=35.23, samples=20 00:42:56.213 iops : min= 160, max= 192, avg=188.00, stdev= 8.80, samples=20 00:42:56.213 lat (usec) : 500=0.11%, 750=8.39%, 1000=37.69% 00:42:56.213 lat (msec) : 2=3.29%, 50=50.53% 00:42:56.213 cpu : usr=95.25%, sys=4.52%, ctx=13, majf=0, minf=120 00:42:56.213 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:56.213 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.213 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:56.213 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:56.213 filename1: (groupid=0, jobs=1): err= 0: pid=2341859: Wed Nov 20 08:40:00 2024 00:42:56.213 read: IOPS=142, BW=570KiB/s (584kB/s)(5712KiB/10023msec) 00:42:56.213 slat (nsec): min=5418, max=37059, avg=7048.97, stdev=3622.25 00:42:56.213 clat (usec): min=618, max=43136, avg=28053.84, stdev=18937.64 00:42:56.213 lat (usec): min=624, max=43165, avg=28060.89, stdev=18936.88 00:42:56.213 clat percentiles (usec): 00:42:56.213 | 1.00th=[ 709], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 930], 00:42:56.213 | 30.00th=[ 1237], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:56.213 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:42:56.213 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:42:56.213 | 99.99th=[43254] 00:42:56.213 bw ( KiB/s): min= 384, max= 768, per=43.06%, avg=569.60, stdev=182.97, samples=20 00:42:56.213 iops : min= 96, max= 192, avg=142.40, stdev=45.74, samples=20 00:42:56.213 lat (usec) : 750=1.75%, 1000=26.82% 00:42:56.213 lat (msec) : 2=4.20%, 50=67.23% 00:42:56.213 cpu : usr=95.38%, sys=4.37%, ctx=12, majf=0, minf=145 00:42:56.213 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:56.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.213 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:56.213 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:56.213 00:42:56.213 Run status group 0 (all jobs): 00:42:56.213 READ: bw=1321KiB/s (1353kB/s), 570KiB/s-752KiB/s (584kB/s-770kB/s), io=12.9MiB (13.6MB), run=10023-10025msec 00:42:56.473 
08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 00:42:56.473 real 0m11.500s 00:42:56.473 user 0m34.632s 00:42:56.473 sys 0m1.284s 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.473 08:40:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 ************************************ 00:42:56.473 END TEST fio_dif_1_multi_subsystems 00:42:56.473 ************************************ 00:42:56.473 08:40:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:56.473 08:40:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:56.473 08:40:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:56.473 08:40:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 ************************************ 00:42:56.473 START TEST fio_dif_rand_params 00:42:56.473 ************************************ 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:56.473 08:40:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 bdev_null0 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 
08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.473 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.473 [2024-11-20 08:40:01.106097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:56.474 
08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:56.474 { 00:42:56.474 "params": { 00:42:56.474 "name": "Nvme$subsystem", 00:42:56.474 "trtype": "$TEST_TRANSPORT", 00:42:56.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:56.474 "adrfam": "ipv4", 00:42:56.474 "trsvcid": "$NVMF_PORT", 00:42:56.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:56.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:56.474 "hdgst": ${hdgst:-false}, 00:42:56.474 "ddgst": ${ddgst:-false} 00:42:56.474 }, 00:42:56.474 "method": "bdev_nvme_attach_controller" 00:42:56.474 } 00:42:56.474 EOF 00:42:56.474 )") 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:56.474 08:40:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:56.474 "params": { 00:42:56.474 "name": "Nvme0", 00:42:56.474 "trtype": "tcp", 00:42:56.474 "traddr": "10.0.0.2", 00:42:56.474 "adrfam": "ipv4", 00:42:56.474 "trsvcid": "4420", 00:42:56.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.474 "hdgst": false, 00:42:56.474 "ddgst": false 00:42:56.474 }, 00:42:56.474 "method": "bdev_nvme_attach_controller" 00:42:56.474 }' 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:56.474 08:40:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.079 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:57.079 ... 00:42:57.079 fio-3.35 00:42:57.079 Starting 3 threads 00:43:03.682 00:43:03.682 filename0: (groupid=0, jobs=1): err= 0: pid=2344054: Wed Nov 20 08:40:07 2024 00:43:03.682 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(109MiB/5007msec) 00:43:03.682 slat (nsec): min=5656, max=32519, avg=8241.62, stdev=1775.62 00:43:03.683 clat (msec): min=5, max=129, avg=17.21, stdev=17.15 00:43:03.683 lat (msec): min=5, max=129, avg=17.22, stdev=17.15 00:43:03.683 clat percentiles (msec): 00:43:03.683 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:43:03.683 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:43:03.683 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 51], 95.00th=[ 53], 00:43:03.683 | 99.00th=[ 91], 99.50th=[ 91], 99.90th=[ 130], 99.95th=[ 130], 00:43:03.683 | 99.99th=[ 130] 00:43:03.683 bw ( KiB/s): min=18176, max=27904, per=26.84%, avg=22272.00, stdev=3300.54, samples=10 00:43:03.683 iops : min= 142, max= 218, avg=174.00, stdev=25.79, samples=10 00:43:03.683 lat (msec) : 10=31.88%, 20=54.13%, 50=4.13%, 100=9.75%, 250=0.11% 00:43:03.683 cpu : usr=95.37%, sys=4.37%, ctx=6, majf=0, minf=23 00:43:03.683 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 issued rwts: total=872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.683 filename0: (groupid=0, jobs=1): err= 0: pid=2344055: Wed Nov 20 08:40:07 2024 00:43:03.683 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(107MiB/5041msec) 00:43:03.683 slat (nsec): min=7940, max=60422, 
avg=9494.03, stdev=2929.16 00:43:03.683 clat (usec): min=5437, max=93385, avg=17694.60, stdev=16450.89 00:43:03.683 lat (usec): min=5446, max=93393, avg=17704.09, stdev=16450.96 00:43:03.683 clat percentiles (usec): 00:43:03.683 | 1.00th=[ 5997], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8979], 00:43:03.683 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:43:03.683 | 70.00th=[13173], 80.00th=[14484], 90.00th=[50070], 95.00th=[52167], 00:43:03.683 | 99.00th=[90702], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:43:03.683 | 99.99th=[93848] 00:43:03.683 bw ( KiB/s): min=12800, max=25088, per=26.26%, avg=21785.60, stdev=3525.52, samples=10 00:43:03.683 iops : min= 100, max= 196, avg=170.20, stdev=27.54, samples=10 00:43:03.683 lat (msec) : 10=29.98%, 20=53.98%, 50=5.74%, 100=10.30% 00:43:03.683 cpu : usr=93.81%, sys=5.06%, ctx=291, majf=0, minf=77 00:43:03.683 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.683 filename0: (groupid=0, jobs=1): err= 0: pid=2344056: Wed Nov 20 08:40:07 2024 00:43:03.683 read: IOPS=306, BW=38.3MiB/s (40.1MB/s)(193MiB/5046msec) 00:43:03.683 slat (nsec): min=7934, max=33520, avg=8800.39, stdev=1374.20 00:43:03.683 clat (usec): min=4757, max=53113, avg=9759.98, stdev=5998.59 00:43:03.683 lat (usec): min=4765, max=53122, avg=9768.78, stdev=5998.60 00:43:03.683 clat percentiles (usec): 00:43:03.683 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6980], 00:43:03.683 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9372], 00:43:03.683 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12649], 95.00th=[13698], 00:43:03.683 | 99.00th=[47449], 
99.50th=[48497], 99.90th=[52167], 99.95th=[53216], 00:43:03.683 | 99.99th=[53216] 00:43:03.683 bw ( KiB/s): min=34816, max=44800, per=47.61%, avg=39500.80, stdev=3483.13, samples=10 00:43:03.683 iops : min= 272, max= 350, avg=308.60, stdev=27.21, samples=10 00:43:03.683 lat (msec) : 10=67.44%, 20=30.49%, 50=1.75%, 100=0.32% 00:43:03.683 cpu : usr=95.00%, sys=4.74%, ctx=7, majf=0, minf=161 00:43:03.683 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.683 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.683 00:43:03.683 Run status group 0 (all jobs): 00:43:03.683 READ: bw=81.0MiB/s (85.0MB/s), 21.2MiB/s-38.3MiB/s (22.2MB/s-40.1MB/s), io=409MiB (429MB), run=5007-5046msec 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:03.683 
08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.683 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.683 bdev_null0 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 [2024-11-20 08:40:07.328063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 bdev_null1 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 bdev_null2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:03.684 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:03.684 { 00:43:03.684 "params": { 00:43:03.684 "name": "Nvme$subsystem", 00:43:03.684 "trtype": "$TEST_TRANSPORT", 00:43:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": 
"$NVMF_PORT", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.685 "hdgst": ${hdgst:-false}, 00:43:03.685 "ddgst": ${ddgst:-false} 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 } 00:43:03.685 EOF 00:43:03.685 )") 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:03.685 { 00:43:03.685 "params": { 00:43:03.685 "name": "Nvme$subsystem", 00:43:03.685 "trtype": "$TEST_TRANSPORT", 00:43:03.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": "$NVMF_PORT", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.685 "hdgst": ${hdgst:-false}, 00:43:03.685 "ddgst": ${ddgst:-false} 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 } 00:43:03.685 EOF 00:43:03.685 )") 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:03.685 { 00:43:03.685 "params": { 00:43:03.685 "name": "Nvme$subsystem", 00:43:03.685 "trtype": "$TEST_TRANSPORT", 00:43:03.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": "$NVMF_PORT", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.685 "hdgst": ${hdgst:-false}, 00:43:03.685 "ddgst": ${ddgst:-false} 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 } 00:43:03.685 EOF 00:43:03.685 )") 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:03.685 "params": { 00:43:03.685 "name": "Nvme0", 00:43:03.685 "trtype": "tcp", 00:43:03.685 "traddr": "10.0.0.2", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": "4420", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:03.685 "hdgst": false, 00:43:03.685 "ddgst": false 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 },{ 00:43:03.685 "params": { 00:43:03.685 "name": "Nvme1", 00:43:03.685 "trtype": "tcp", 00:43:03.685 "traddr": "10.0.0.2", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": "4420", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:03.685 "hdgst": false, 00:43:03.685 "ddgst": false 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 },{ 00:43:03.685 "params": { 00:43:03.685 "name": "Nvme2", 00:43:03.685 "trtype": "tcp", 00:43:03.685 "traddr": "10.0.0.2", 00:43:03.685 "adrfam": "ipv4", 00:43:03.685 "trsvcid": "4420", 00:43:03.685 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:03.685 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:03.685 "hdgst": false, 00:43:03.685 "ddgst": false 00:43:03.685 }, 00:43:03.685 "method": "bdev_nvme_attach_controller" 00:43:03.685 }' 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.685 08:40:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:03.685 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.686 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.686 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.686 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:03.686 08:40:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.686 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.686 ... 00:43:03.686 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.686 ... 00:43:03.686 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.686 ... 
00:43:03.686 fio-3.35 00:43:03.686 Starting 24 threads 00:43:15.925 00:43:15.925 filename0: (groupid=0, jobs=1): err= 0: pid=2345565: Wed Nov 20 08:40:18 2024 00:43:15.925 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10005msec) 00:43:15.925 slat (nsec): min=5567, max=82215, avg=9324.01, stdev=6973.17 00:43:15.925 clat (usec): min=3908, max=34660, avg=28103.01, stdev=6039.22 00:43:15.925 lat (usec): min=3928, max=34666, avg=28112.33, stdev=6039.81 00:43:15.925 clat percentiles (usec): 00:43:15.925 | 1.00th=[ 4948], 5.00th=[20579], 10.00th=[21365], 20.00th=[22152], 00:43:15.925 | 30.00th=[22938], 40.00th=[24249], 50.00th=[32375], 60.00th=[32637], 00:43:15.925 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:43:15.925 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:43:15.925 | 99.99th=[34866] 00:43:15.925 bw ( KiB/s): min= 1920, max= 2816, per=4.86%, avg=2270.00, stdev=285.23, samples=19 00:43:15.925 iops : min= 480, max= 704, avg=567.47, stdev=71.27, samples=19 00:43:15.925 lat (msec) : 4=0.09%, 10=1.32%, 20=2.25%, 50=96.34% 00:43:15.925 cpu : usr=98.79%, sys=0.93%, ctx=25, majf=0, minf=47 00:43:15.925 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.925 filename0: (groupid=0, jobs=1): err= 0: pid=2345566: Wed Nov 20 08:40:18 2024 00:43:15.925 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:43:15.925 slat (nsec): min=5571, max=43439, avg=11072.16, stdev=6588.10 00:43:15.925 clat (usec): min=23658, max=81462, avg=33386.61, stdev=2922.18 00:43:15.925 lat (usec): min=23664, max=81470, avg=33397.69, stdev=2921.93 00:43:15.925 clat percentiles (usec): 
00:43:15.925 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:43:15.925 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:43:15.925 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:43:15.925 | 99.00th=[35390], 99.50th=[41157], 99.90th=[81265], 99.95th=[81265], 00:43:15.925 | 99.99th=[81265] 00:43:15.925 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1907.20, stdev=82.01, samples=20 00:43:15.925 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:43:15.925 lat (msec) : 50=99.67%, 100=0.33% 00:43:15.925 cpu : usr=98.91%, sys=0.78%, ctx=68, majf=0, minf=44 00:43:15.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.925 filename0: (groupid=0, jobs=1): err= 0: pid=2345567: Wed Nov 20 08:40:18 2024 00:43:15.925 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.7MiB/10042msec) 00:43:15.925 slat (nsec): min=5630, max=75027, avg=19928.97, stdev=12027.19 00:43:15.925 clat (msec): min=31, max=102, avg=33.40, stdev= 4.24 00:43:15.925 lat (msec): min=31, max=102, avg=33.42, stdev= 4.24 00:43:15.925 clat percentiles (msec): 00:43:15.925 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.925 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.925 | 99.00th=[ 36], 99.50th=[ 55], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.925 | 99.99th=[ 103] 00:43:15.925 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.26, stdev=79.52, samples=19 00:43:15.925 iops : min= 448, max= 512, avg=478.32, stdev=19.88, samples=19 00:43:15.925 lat (msec) : 50=99.33%, 
100=0.33%, 250=0.33% 00:43:15.925 cpu : usr=98.64%, sys=1.03%, ctx=57, majf=0, minf=45 00:43:15.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.925 filename0: (groupid=0, jobs=1): err= 0: pid=2345568: Wed Nov 20 08:40:18 2024 00:43:15.925 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.9MiB/10077msec) 00:43:15.925 slat (nsec): min=5561, max=70085, avg=13194.69, stdev=10374.29 00:43:15.925 clat (msec): min=12, max=102, avg=33.26, stdev= 4.31 00:43:15.925 lat (msec): min=12, max=102, avg=33.27, stdev= 4.31 00:43:15.925 clat percentiles (msec): 00:43:15.925 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.925 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.925 | 99.00th=[ 36], 99.50th=[ 37], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.925 | 99.99th=[ 103] 00:43:15.925 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1926.40, stdev=65.33, samples=20 00:43:15.925 iops : min= 448, max= 512, avg=481.60, stdev=16.33, samples=20 00:43:15.925 lat (msec) : 20=0.66%, 50=99.01%, 250=0.33% 00:43:15.925 cpu : usr=98.99%, sys=0.72%, ctx=20, majf=0, minf=46 00:43:15.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.925 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.925 filename0: (groupid=0, jobs=1): err= 0: pid=2345569: Wed 
Nov 20 08:40:18 2024 00:43:15.925 read: IOPS=482, BW=1931KiB/s (1977kB/s)(19.0MiB/10076msec) 00:43:15.925 slat (nsec): min=5578, max=68024, avg=9554.33, stdev=6089.65 00:43:15.926 clat (msec): min=9, max=101, avg=33.06, stdev= 4.61 00:43:15.926 lat (msec): min=9, max=101, avg=33.07, stdev= 4.61 00:43:15.926 clat percentiles (msec): 00:43:15.926 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.926 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.926 | 99.00th=[ 35], 99.50th=[ 37], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.926 | 99.99th=[ 103] 00:43:15.926 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=1939.20, stdev=85.87, samples=20 00:43:15.926 iops : min= 448, max= 544, avg=484.80, stdev=21.47, samples=20 00:43:15.926 lat (msec) : 10=0.04%, 20=0.62%, 50=99.01%, 250=0.33% 00:43:15.926 cpu : usr=98.91%, sys=0.73%, ctx=99, majf=0, minf=36 00:43:15.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename0: (groupid=0, jobs=1): err= 0: pid=2345570: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.9MiB/10088msec) 00:43:15.926 slat (nsec): min=5595, max=75092, avg=11950.27, stdev=9683.19 00:43:15.926 clat (msec): min=12, max=113, avg=33.16, stdev= 4.57 00:43:15.926 lat (msec): min=12, max=113, avg=33.18, stdev= 4.57 00:43:15.926 clat percentiles (msec): 00:43:15.926 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.926 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 
35], 00:43:15.926 | 99.00th=[ 36], 99.50th=[ 36], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.926 | 99.99th=[ 114] 00:43:15.926 bw ( KiB/s): min= 1792, max= 2176, per=4.14%, avg=1932.80, stdev=70.72, samples=20 00:43:15.926 iops : min= 448, max= 544, avg=483.20, stdev=17.68, samples=20 00:43:15.926 lat (msec) : 20=0.66%, 50=99.01%, 250=0.33% 00:43:15.926 cpu : usr=99.04%, sys=0.69%, ctx=40, majf=0, minf=48 00:43:15.926 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename0: (groupid=0, jobs=1): err= 0: pid=2345571: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=484, BW=1937KiB/s (1984kB/s)(19.0MiB/10043msec) 00:43:15.926 slat (nsec): min=5578, max=72847, avg=14934.33, stdev=10649.29 00:43:15.926 clat (usec): min=14609, max=81412, avg=32933.68, stdev=4926.20 00:43:15.926 lat (usec): min=14622, max=81418, avg=32948.61, stdev=4925.62 00:43:15.926 clat percentiles (usec): 00:43:15.926 | 1.00th=[21365], 5.00th=[26084], 10.00th=[27395], 20.00th=[32375], 00:43:15.926 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33817], 00:43:15.926 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[38536], 00:43:15.926 | 99.00th=[49546], 99.50th=[53216], 99.90th=[81265], 99.95th=[81265], 00:43:15.926 | 99.99th=[81265] 00:43:15.926 bw ( KiB/s): min= 1792, max= 2112, per=4.17%, avg=1947.11, stdev=79.39, samples=19 00:43:15.926 iops : min= 448, max= 528, avg=486.74, stdev=19.87, samples=19 00:43:15.926 lat (msec) : 20=0.33%, 50=98.73%, 100=0.95% 00:43:15.926 cpu : usr=98.87%, sys=0.85%, ctx=44, majf=0, minf=42 00:43:15.926 IO depths : 1=2.7%, 2=6.0%, 4=14.3%, 8=65.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:43:15.926 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=91.6%, 8=4.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename0: (groupid=0, jobs=1): err= 0: pid=2345572: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=476, BW=1908KiB/s (1954kB/s)(18.8MiB/10063msec) 00:43:15.926 slat (nsec): min=5774, max=53960, avg=16013.16, stdev=8526.07 00:43:15.926 clat (msec): min=21, max=115, avg=33.36, stdev= 4.17 00:43:15.926 lat (msec): min=21, max=115, avg=33.37, stdev= 4.17 00:43:15.926 clat percentiles (msec): 00:43:15.926 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.926 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.926 | 99.00th=[ 36], 99.50th=[ 37], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.926 | 99.99th=[ 116] 00:43:15.926 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.75, stdev=50.46, samples=20 00:43:15.926 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:43:15.926 lat (msec) : 50=99.67%, 250=0.33% 00:43:15.926 cpu : usr=98.91%, sys=0.74%, ctx=114, majf=0, minf=26 00:43:15.926 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename1: (groupid=0, jobs=1): err= 0: pid=2345573: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=476, BW=1908KiB/s (1954kB/s)(18.7MiB/10043msec) 00:43:15.926 slat (nsec): min=5584, max=58524, avg=15281.72, stdev=9313.23 00:43:15.926 clat (usec): 
min=22431, max=81393, avg=33407.65, stdev=3644.76 00:43:15.926 lat (usec): min=22437, max=81418, avg=33422.94, stdev=3644.15 00:43:15.926 clat percentiles (usec): 00:43:15.926 | 1.00th=[27657], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:43:15.926 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:43:15.926 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:43:15.926 | 99.00th=[44827], 99.50th=[61604], 99.90th=[81265], 99.95th=[81265], 00:43:15.926 | 99.99th=[81265] 00:43:15.926 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1915.79, stdev=55.90, samples=19 00:43:15.926 iops : min= 448, max= 512, avg=478.95, stdev=13.97, samples=19 00:43:15.926 lat (msec) : 50=99.16%, 100=0.84% 00:43:15.926 cpu : usr=98.75%, sys=0.91%, ctx=43, majf=0, minf=36 00:43:15.926 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename1: (groupid=0, jobs=1): err= 0: pid=2345574: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:43:15.926 slat (nsec): min=5570, max=67978, avg=12540.73, stdev=9623.59 00:43:15.926 clat (usec): min=31753, max=81359, avg=33370.92, stdev=2908.03 00:43:15.926 lat (usec): min=31779, max=81367, avg=33383.46, stdev=2907.10 00:43:15.926 clat percentiles (usec): 00:43:15.926 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:43:15.926 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:43:15.926 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:43:15.926 | 99.00th=[35390], 99.50th=[41157], 99.90th=[81265], 99.95th=[81265], 00:43:15.926 | 
99.99th=[81265] 00:43:15.926 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1907.20, stdev=82.01, samples=20 00:43:15.926 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:43:15.926 lat (msec) : 50=99.67%, 100=0.33% 00:43:15.926 cpu : usr=98.86%, sys=0.87%, ctx=14, majf=0, minf=36 00:43:15.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.926 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.926 filename1: (groupid=0, jobs=1): err= 0: pid=2345575: Wed Nov 20 08:40:18 2024 00:43:15.926 read: IOPS=481, BW=1924KiB/s (1971kB/s)(18.9MiB/10077msec) 00:43:15.926 slat (nsec): min=5603, max=75026, avg=11948.47, stdev=8025.46 00:43:15.927 clat (msec): min=13, max=102, avg=33.16, stdev= 4.45 00:43:15.927 lat (msec): min=13, max=102, avg=33.17, stdev= 4.45 00:43:15.927 clat percentiles (msec): 00:43:15.927 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.927 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.927 | 99.00th=[ 36], 99.50th=[ 36], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.927 | 99.99th=[ 103] 00:43:15.927 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1932.80, stdev=57.24, samples=20 00:43:15.927 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:43:15.927 lat (msec) : 20=0.66%, 50=99.01%, 250=0.33% 00:43:15.927 cpu : usr=99.00%, sys=0.72%, ctx=14, majf=0, minf=34 00:43:15.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:43:15.927 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.927 filename1: (groupid=0, jobs=1): err= 0: pid=2345576: Wed Nov 20 08:40:18 2024 00:43:15.927 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.7MiB/10042msec) 00:43:15.927 slat (nsec): min=5599, max=69488, avg=18983.10, stdev=11115.36 00:43:15.927 clat (msec): min=23, max=102, avg=33.41, stdev= 4.24 00:43:15.927 lat (msec): min=23, max=102, avg=33.43, stdev= 4.24 00:43:15.927 clat percentiles (msec): 00:43:15.927 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.927 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.927 | 99.00th=[ 36], 99.50th=[ 55], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.927 | 99.99th=[ 103] 00:43:15.927 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.26, stdev=79.52, samples=19 00:43:15.927 iops : min= 448, max= 512, avg=478.32, stdev=19.88, samples=19 00:43:15.927 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:43:15.927 cpu : usr=98.69%, sys=0.97%, ctx=49, majf=0, minf=32 00:43:15.927 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.927 filename1: (groupid=0, jobs=1): err= 0: pid=2345577: Wed Nov 20 08:40:18 2024 00:43:15.927 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.7MiB/10043msec) 00:43:15.927 slat (nsec): min=5577, max=67819, avg=17276.05, stdev=10857.20 00:43:15.927 clat (usec): min=31662, max=81391, avg=33434.64, stdev=3364.62 00:43:15.927 lat (usec): min=31681, max=81397, avg=33451.92, stdev=3363.61 00:43:15.927 clat 
percentiles (usec): 00:43:15.927 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:43:15.927 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:43:15.927 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:43:15.927 | 99.00th=[43254], 99.50th=[62129], 99.90th=[81265], 99.95th=[81265], 00:43:15.927 | 99.99th=[81265] 00:43:15.927 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.42, stdev=51.41, samples=19 00:43:15.927 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:43:15.927 lat (msec) : 50=99.33%, 100=0.67% 00:43:15.927 cpu : usr=98.75%, sys=0.83%, ctx=127, majf=0, minf=47 00:43:15.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.927 filename1: (groupid=0, jobs=1): err= 0: pid=2345578: Wed Nov 20 08:40:18 2024 00:43:15.927 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.3MiB/10073msec) 00:43:15.927 slat (nsec): min=2806, max=58998, avg=7814.25, stdev=3626.65 00:43:15.927 clat (usec): min=1752, max=72516, avg=26975.50, stdev=6507.61 00:43:15.927 lat (usec): min=1758, max=72523, avg=26983.32, stdev=6508.48 00:43:15.927 clat percentiles (usec): 00:43:15.927 | 1.00th=[ 4490], 5.00th=[20579], 10.00th=[21103], 20.00th=[21890], 00:43:15.927 | 30.00th=[22414], 40.00th=[23462], 50.00th=[24249], 60.00th=[32375], 00:43:15.927 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:43:15.927 | 99.00th=[34341], 99.50th=[34341], 99.90th=[72877], 99.95th=[72877], 00:43:15.927 | 99.99th=[72877] 00:43:15.927 bw ( KiB/s): min= 1920, max= 2872, per=5.10%, avg=2377.20, stdev=313.74, samples=20 00:43:15.927 iops : min= 480, max= 718, 
avg=594.30, stdev=78.44, samples=20 00:43:15.927 lat (msec) : 2=0.12%, 4=0.50%, 10=1.38%, 20=1.34%, 50=96.39% 00:43:15.927 lat (msec) : 100=0.27% 00:43:15.927 cpu : usr=98.20%, sys=1.17%, ctx=234, majf=0, minf=60 00:43:15.927 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 issued rwts: total=5959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.927 filename1: (groupid=0, jobs=1): err= 0: pid=2345579: Wed Nov 20 08:40:18 2024 00:43:15.927 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.8MiB/10051msec) 00:43:15.927 slat (nsec): min=5746, max=72539, avg=16816.25, stdev=9860.97 00:43:15.927 clat (msec): min=27, max=102, avg=33.35, stdev= 4.05 00:43:15.927 lat (msec): min=27, max=102, avg=33.37, stdev= 4.05 00:43:15.927 clat percentiles (msec): 00:43:15.927 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.927 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.927 | 99.00th=[ 36], 99.50th=[ 37], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.927 | 99.99th=[ 103] 00:43:15.927 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.60, stdev=50.44, samples=20 00:43:15.927 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:43:15.927 lat (msec) : 50=99.67%, 250=0.33% 00:43:15.927 cpu : usr=98.95%, sys=0.76%, ctx=41, majf=0, minf=44 00:43:15.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.927 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.927 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:43:15.927 filename1: (groupid=0, jobs=1): err= 0: pid=2345580: Wed Nov 20 08:40:18 2024 00:43:15.927 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.9MiB/10088msec) 00:43:15.927 slat (nsec): min=5575, max=72033, avg=16462.38, stdev=10740.91 00:43:15.927 clat (msec): min=13, max=113, avg=33.12, stdev= 4.52 00:43:15.927 lat (msec): min=13, max=113, avg=33.14, stdev= 4.52 00:43:15.927 clat percentiles (msec): 00:43:15.927 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.927 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.927 | 99.00th=[ 35], 99.50th=[ 36], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.927 | 99.99th=[ 114] 00:43:15.927 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1932.80, stdev=57.24, samples=20 00:43:15.927 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:43:15.927 lat (msec) : 20=0.66%, 50=99.01%, 250=0.33% 00:43:15.927 cpu : usr=99.05%, sys=0.69%, ctx=11, majf=0, minf=35 00:43:15.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345581: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=488, BW=1953KiB/s (2000kB/s)(19.2MiB/10094msec) 00:43:15.928 slat (nsec): min=5573, max=80023, avg=9751.93, stdev=7234.46 00:43:15.928 clat (msec): min=4, max=101, avg=32.69, stdev= 5.53 00:43:15.928 lat (msec): min=4, max=101, avg=32.70, stdev= 5.53 00:43:15.928 clat percentiles (msec): 00:43:15.928 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.928 | 30.00th=[ 33], 
40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:43:15.928 | 99.00th=[ 36], 99.50th=[ 36], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.928 | 99.99th=[ 103] 00:43:15.928 bw ( KiB/s): min= 1792, max= 2560, per=4.21%, avg=1964.80, stdev=151.31, samples=20 00:43:15.928 iops : min= 448, max= 640, avg=491.20, stdev=37.83, samples=20 00:43:15.928 lat (msec) : 10=1.30%, 20=1.01%, 50=97.36%, 250=0.32% 00:43:15.928 cpu : usr=98.98%, sys=0.75%, ctx=14, majf=0, minf=30 00:43:15.928 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345582: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=476, BW=1908KiB/s (1954kB/s)(18.7MiB/10043msec) 00:43:15.928 slat (nsec): min=5577, max=68478, avg=15604.51, stdev=10094.49 00:43:15.928 clat (msec): min=21, max=104, avg=33.36, stdev= 4.38 00:43:15.928 lat (msec): min=21, max=104, avg=33.38, stdev= 4.38 00:43:15.928 clat percentiles (msec): 00:43:15.928 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.928 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:43:15.928 | 99.00th=[ 44], 99.50th=[ 62], 99.90th=[ 105], 99.95th=[ 105], 00:43:15.928 | 99.99th=[ 105] 00:43:15.928 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1919.16, stdev=55.55, samples=19 00:43:15.928 iops : min= 448, max= 512, avg=479.79, stdev=13.89, samples=19 00:43:15.928 lat (msec) : 50=99.33%, 100=0.54%, 250=0.13% 00:43:15.928 cpu : usr=98.88%, sys=0.84%, ctx=14, majf=0, minf=27 00:43:15.928 IO depths : 
1=5.4%, 2=11.0%, 4=22.5%, 8=53.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345583: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10041msec) 00:43:15.928 slat (nsec): min=5582, max=73400, avg=16353.04, stdev=13230.94 00:43:15.928 clat (usec): min=18830, max=81470, avg=33206.99, stdev=4277.59 00:43:15.928 lat (usec): min=18850, max=81476, avg=33223.34, stdev=4276.58 00:43:15.928 clat percentiles (usec): 00:43:15.928 | 1.00th=[21365], 5.00th=[28181], 10.00th=[32113], 20.00th=[32375], 00:43:15.928 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:43:15.928 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:43:15.928 | 99.00th=[49021], 99.50th=[53216], 99.90th=[81265], 99.95th=[81265], 00:43:15.928 | 99.99th=[81265] 00:43:15.928 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1926.05, stdev=84.56, samples=19 00:43:15.928 iops : min= 448, max= 512, avg=481.47, stdev=21.21, samples=19 00:43:15.928 lat (msec) : 20=0.31%, 50=98.69%, 100=1.00% 00:43:15.928 cpu : usr=98.68%, sys=0.91%, ctx=52, majf=0, minf=29 00:43:15.928 IO depths : 1=4.7%, 2=10.3%, 4=23.0%, 8=54.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345584: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=477, BW=1910KiB/s 
(1955kB/s)(18.8MiB/10071msec) 00:43:15.928 slat (nsec): min=5581, max=76380, avg=16758.82, stdev=11556.46 00:43:15.928 clat (msec): min=20, max=101, avg=33.29, stdev= 4.39 00:43:15.928 lat (msec): min=20, max=101, avg=33.30, stdev= 4.39 00:43:15.928 clat percentiles (msec): 00:43:15.928 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.928 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 37], 00:43:15.928 | 99.00th=[ 47], 99.50th=[ 50], 99.90th=[ 82], 99.95th=[ 82], 00:43:15.928 | 99.99th=[ 103] 00:43:15.928 bw ( KiB/s): min= 1792, max= 2176, per=4.11%, avg=1916.80, stdev=101.27, samples=20 00:43:15.928 iops : min= 448, max= 544, avg=479.20, stdev=25.32, samples=20 00:43:15.928 lat (msec) : 50=99.50%, 100=0.46%, 250=0.04% 00:43:15.928 cpu : usr=97.41%, sys=1.70%, ctx=325, majf=0, minf=34 00:43:15.928 IO depths : 1=5.5%, 2=11.2%, 4=23.2%, 8=53.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345585: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=477, BW=1911KiB/s (1956kB/s)(18.8MiB/10049msec) 00:43:15.928 slat (nsec): min=5577, max=65796, avg=17963.77, stdev=11220.02 00:43:15.928 clat (msec): min=27, max=102, avg=33.32, stdev= 4.06 00:43:15.928 lat (msec): min=27, max=102, avg=33.33, stdev= 4.06 00:43:15.928 clat percentiles (msec): 00:43:15.928 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.928 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:43:15.928 | 99.00th=[ 35], 99.50th=[ 37], 99.90th=[ 103], 
99.95th=[ 103], 00:43:15.928 | 99.99th=[ 103] 00:43:15.928 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.75, stdev=50.46, samples=20 00:43:15.928 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:43:15.928 lat (msec) : 50=99.67%, 250=0.33% 00:43:15.928 cpu : usr=98.77%, sys=0.85%, ctx=105, majf=0, minf=30 00:43:15.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.928 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.928 filename2: (groupid=0, jobs=1): err= 0: pid=2345586: Wed Nov 20 08:40:18 2024 00:43:15.928 read: IOPS=481, BW=1928KiB/s (1974kB/s)(19.0MiB/10089msec) 00:43:15.928 slat (nsec): min=5577, max=71787, avg=18007.80, stdev=11359.25 00:43:15.928 clat (msec): min=12, max=102, avg=33.04, stdev= 4.97 00:43:15.928 lat (msec): min=12, max=102, avg=33.06, stdev= 4.97 00:43:15.928 clat percentiles (msec): 00:43:15.929 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.929 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.929 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:43:15.929 | 99.00th=[ 46], 99.50th=[ 50], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.929 | 99.99th=[ 103] 00:43:15.929 bw ( KiB/s): min= 1792, max= 2192, per=4.15%, avg=1938.40, stdev=75.81, samples=20 00:43:15.929 iops : min= 448, max= 548, avg=484.60, stdev=18.95, samples=20 00:43:15.929 lat (msec) : 20=0.95%, 50=98.60%, 100=0.12%, 250=0.33% 00:43:15.929 cpu : usr=98.40%, sys=1.11%, ctx=155, majf=0, minf=36 00:43:15.929 IO depths : 1=5.6%, 2=11.3%, 4=23.2%, 8=52.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:43:15.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 complete : 0=0.0%, 4=93.6%, 
8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.929 filename2: (groupid=0, jobs=1): err= 0: pid=2345587: Wed Nov 20 08:40:18 2024 00:43:15.929 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.7MiB/10043msec) 00:43:15.929 slat (nsec): min=5680, max=75839, avg=18761.17, stdev=11861.58 00:43:15.929 clat (msec): min=20, max=103, avg=33.42, stdev= 4.33 00:43:15.929 lat (msec): min=20, max=103, avg=33.44, stdev= 4.33 00:43:15.929 clat percentiles (msec): 00:43:15.929 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:15.929 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:43:15.929 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:43:15.929 | 99.00th=[ 36], 99.50th=[ 54], 99.90th=[ 103], 99.95th=[ 103], 00:43:15.929 | 99.99th=[ 105] 00:43:15.929 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1913.42, stdev=76.53, samples=19 00:43:15.929 iops : min= 448, max= 512, avg=478.32, stdev=19.20, samples=19 00:43:15.929 lat (msec) : 50=99.33%, 100=0.33%, 250=0.33% 00:43:15.929 cpu : usr=98.82%, sys=0.84%, ctx=66, majf=0, minf=34 00:43:15.929 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:43:15.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.929 filename2: (groupid=0, jobs=1): err= 0: pid=2345588: Wed Nov 20 08:40:18 2024 00:43:15.929 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10005msec) 00:43:15.929 slat (nsec): min=5587, max=71110, avg=15806.86, stdev=11315.14 00:43:15.929 clat (usec): min=16678, max=81249, avg=32725.46, stdev=4956.75 00:43:15.929 lat (usec): min=16688, max=81256, 
avg=32741.27, stdev=4956.31 00:43:15.929 clat percentiles (usec): 00:43:15.929 | 1.00th=[21365], 5.00th=[25035], 10.00th=[26870], 20.00th=[32113], 00:43:15.929 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:43:15.929 | 70.00th=[33817], 80.00th=[33817], 90.00th=[36439], 95.00th=[39060], 00:43:15.929 | 99.00th=[46924], 99.50th=[52167], 99.90th=[81265], 99.95th=[81265], 00:43:15.929 | 99.99th=[81265] 00:43:15.929 bw ( KiB/s): min= 1792, max= 2096, per=4.20%, avg=1959.58, stdev=65.62, samples=19 00:43:15.929 iops : min= 448, max= 524, avg=489.89, stdev=16.40, samples=19 00:43:15.929 lat (msec) : 20=0.53%, 50=98.77%, 100=0.70% 00:43:15.929 cpu : usr=98.67%, sys=0.89%, ctx=41, majf=0, minf=81 00:43:15.929 IO depths : 1=2.0%, 2=4.6%, 4=11.5%, 8=69.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:43:15.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 complete : 0=0.0%, 4=90.9%, 8=5.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.929 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.929 00:43:15.929 Run status group 0 (all jobs): 00:43:15.929 READ: bw=45.6MiB/s (47.8MB/s), 1905KiB/s-2366KiB/s (1951kB/s-2423kB/s), io=460MiB (482MB), run=10005-10094msec 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@36 -- # local sub_id=2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 1 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 bdev_null0 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.929 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.929 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:15.929 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.929 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 [2024-11-20 08:40:19.021261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:15.930 08:40:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 bdev_null1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:15.930 { 00:43:15.930 "params": { 00:43:15.930 "name": "Nvme$subsystem", 00:43:15.930 "trtype": "$TEST_TRANSPORT", 00:43:15.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:15.930 "adrfam": "ipv4", 00:43:15.930 "trsvcid": "$NVMF_PORT", 00:43:15.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:15.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:15.930 "hdgst": ${hdgst:-false}, 00:43:15.930 "ddgst": ${ddgst:-false} 00:43:15.930 }, 00:43:15.930 "method": "bdev_nvme_attach_controller" 00:43:15.930 } 00:43:15.930 EOF 00:43:15.930 )") 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:15.930 { 00:43:15.930 "params": { 00:43:15.930 "name": "Nvme$subsystem", 00:43:15.930 "trtype": "$TEST_TRANSPORT", 00:43:15.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:15.930 "adrfam": "ipv4", 00:43:15.930 "trsvcid": "$NVMF_PORT", 00:43:15.930 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:43:15.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:15.930 "hdgst": ${hdgst:-false}, 00:43:15.930 "ddgst": ${ddgst:-false} 00:43:15.930 }, 00:43:15.930 "method": "bdev_nvme_attach_controller" 00:43:15.930 } 00:43:15.930 EOF 00:43:15.930 )") 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:15.930 "params": { 00:43:15.930 "name": "Nvme0", 00:43:15.930 "trtype": "tcp", 00:43:15.930 "traddr": "10.0.0.2", 00:43:15.930 "adrfam": "ipv4", 00:43:15.930 "trsvcid": "4420", 00:43:15.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:15.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:15.930 "hdgst": false, 00:43:15.930 "ddgst": false 00:43:15.930 }, 00:43:15.930 "method": "bdev_nvme_attach_controller" 00:43:15.930 },{ 00:43:15.930 "params": { 00:43:15.930 "name": "Nvme1", 00:43:15.930 "trtype": "tcp", 00:43:15.930 "traddr": "10.0.0.2", 00:43:15.930 "adrfam": "ipv4", 00:43:15.930 "trsvcid": "4420", 00:43:15.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:15.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:15.930 "hdgst": false, 00:43:15.930 "ddgst": false 00:43:15.930 }, 00:43:15.930 "method": "bdev_nvme_attach_controller" 00:43:15.930 }' 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in 
"${sanitizers[@]}" 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:15.930 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:15.931 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:15.931 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:15.931 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:15.931 08:40:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.931 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:15.931 ... 00:43:15.931 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:15.931 ... 
00:43:15.931 fio-3.35 00:43:15.931 Starting 4 threads 00:43:21.213 00:43:21.213 filename0: (groupid=0, jobs=1): err= 0: pid=2347780: Wed Nov 20 08:40:25 2024 00:43:21.213 read: IOPS=2116, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5002msec) 00:43:21.214 slat (nsec): min=5391, max=54186, avg=7993.26, stdev=2563.27 00:43:21.214 clat (usec): min=1440, max=6188, avg=3762.23, stdev=430.19 00:43:21.214 lat (usec): min=1446, max=6208, avg=3770.22, stdev=430.01 00:43:21.214 clat percentiles (usec): 00:43:21.214 | 1.00th=[ 3032], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:43:21.214 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:43:21.214 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3949], 95.00th=[ 4817], 00:43:21.214 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5997], 99.95th=[ 6063], 00:43:21.214 | 99.99th=[ 6194] 00:43:21.214 bw ( KiB/s): min=16352, max=17360, per=25.43%, avg=16993.78, stdev=378.07, samples=9 00:43:21.214 iops : min= 2044, max= 2170, avg=2124.22, stdev=47.26, samples=9 00:43:21.214 lat (msec) : 2=0.03%, 4=90.49%, 10=9.49% 00:43:21.214 cpu : usr=96.72%, sys=3.02%, ctx=6, majf=0, minf=79 00:43:21.214 IO depths : 1=0.1%, 2=0.1%, 4=65.8%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 issued rwts: total=10585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:21.214 filename0: (groupid=0, jobs=1): err= 0: pid=2347781: Wed Nov 20 08:40:25 2024 00:43:21.214 read: IOPS=2004, BW=15.7MiB/s (16.4MB/s)(78.3MiB/5001msec) 00:43:21.214 slat (nsec): min=5388, max=61460, avg=5880.72, stdev=1680.90 00:43:21.214 clat (usec): min=1441, max=6317, avg=3975.55, stdev=700.60 00:43:21.214 lat (usec): min=1446, max=6322, avg=3981.43, stdev=700.54 00:43:21.214 clat percentiles (usec): 00:43:21.214 | 1.00th=[ 3130], 
5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3556], 00:43:21.214 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3720], 60.00th=[ 3785], 00:43:21.214 | 70.00th=[ 3818], 80.00th=[ 4015], 90.00th=[ 5473], 95.00th=[ 5538], 00:43:21.214 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6325], 00:43:21.214 | 99.99th=[ 6325] 00:43:21.214 bw ( KiB/s): min=15744, max=16416, per=23.94%, avg=15996.44, stdev=206.42, samples=9 00:43:21.214 iops : min= 1968, max= 2052, avg=1999.56, stdev=25.80, samples=9 00:43:21.214 lat (msec) : 2=0.03%, 4=79.37%, 10=20.60% 00:43:21.214 cpu : usr=97.04%, sys=2.72%, ctx=7, majf=0, minf=79 00:43:21.214 IO depths : 1=0.1%, 2=0.1%, 4=72.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 issued rwts: total=10023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:21.214 filename1: (groupid=0, jobs=1): err= 0: pid=2347782: Wed Nov 20 08:40:25 2024 00:43:21.214 read: IOPS=2081, BW=16.3MiB/s (17.1MB/s)(81.3MiB/5002msec) 00:43:21.214 slat (nsec): min=5385, max=69027, avg=6151.29, stdev=2283.83 00:43:21.214 clat (usec): min=2296, max=6597, avg=3827.64, stdev=568.25 00:43:21.214 lat (usec): min=2306, max=6603, avg=3833.79, stdev=568.32 00:43:21.214 clat percentiles (usec): 00:43:21.214 | 1.00th=[ 2966], 5.00th=[ 3261], 10.00th=[ 3392], 20.00th=[ 3556], 00:43:21.214 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:43:21.214 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4621], 95.00th=[ 5473], 00:43:21.214 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6063], 99.95th=[ 6325], 00:43:21.214 | 99.99th=[ 6587] 00:43:21.214 bw ( KiB/s): min=16128, max=17200, per=24.98%, avg=16691.56, stdev=344.44, samples=9 00:43:21.214 iops : min= 2016, max= 2150, avg=2086.44, stdev=43.06, samples=9 00:43:21.214 
lat (msec) : 4=86.38%, 10=13.62% 00:43:21.214 cpu : usr=96.64%, sys=3.14%, ctx=6, majf=0, minf=77 00:43:21.214 IO depths : 1=0.1%, 2=0.1%, 4=68.6%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 issued rwts: total=10412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:21.214 filename1: (groupid=0, jobs=1): err= 0: pid=2347784: Wed Nov 20 08:40:25 2024 00:43:21.214 read: IOPS=2151, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5002msec) 00:43:21.214 slat (nsec): min=5384, max=59327, avg=6178.84, stdev=2433.45 00:43:21.214 clat (usec): min=2054, max=5960, avg=3702.32, stdev=312.51 00:43:21.214 lat (usec): min=2060, max=5975, avg=3708.50, stdev=312.62 00:43:21.214 clat percentiles (usec): 00:43:21.214 | 1.00th=[ 3097], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3556], 00:43:21.214 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3785], 00:43:21.214 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4113], 00:43:21.214 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 5669], 99.95th=[ 5735], 00:43:21.214 | 99.99th=[ 5932] 00:43:21.214 bw ( KiB/s): min=16832, max=17536, per=25.67%, avg=17155.67, stdev=244.07, samples=9 00:43:21.214 iops : min= 2104, max= 2192, avg=2144.44, stdev=30.53, samples=9 00:43:21.214 lat (msec) : 4=94.02%, 10=5.98% 00:43:21.214 cpu : usr=97.84%, sys=1.92%, ctx=8, majf=0, minf=93 00:43:21.214 IO depths : 1=0.1%, 2=0.1%, 4=70.5%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.214 issued rwts: total=10760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:21.214 00:43:21.214 Run 
status group 0 (all jobs): 00:43:21.214 READ: bw=65.3MiB/s (68.4MB/s), 15.7MiB/s-16.8MiB/s (16.4MB/s-17.6MB/s), io=326MiB (342MB), run=5001-5002msec 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.214 08:40:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.214 00:43:21.214 real 0m24.469s 00:43:21.214 user 5m18.306s 00:43:21.214 sys 0m4.502s 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 ************************************ 00:43:21.214 END TEST fio_dif_rand_params 00:43:21.214 ************************************ 00:43:21.214 08:40:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:21.214 08:40:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:21.214 08:40:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 ************************************ 00:43:21.214 START TEST fio_dif_digest 00:43:21.214 ************************************ 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:21.214 08:40:25 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.214 bdev_null0 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:21.214 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:21.215 08:40:25 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.215 [2024-11-20 08:40:25.672744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:21.215 { 00:43:21.215 "params": { 00:43:21.215 "name": "Nvme$subsystem", 00:43:21.215 "trtype": "$TEST_TRANSPORT", 00:43:21.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:21.215 "adrfam": "ipv4", 00:43:21.215 "trsvcid": "$NVMF_PORT", 00:43:21.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:21.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:43:21.215 "hdgst": ${hdgst:-false}, 00:43:21.215 "ddgst": ${ddgst:-false} 00:43:21.215 }, 00:43:21.215 "method": "bdev_nvme_attach_controller" 00:43:21.215 } 00:43:21.215 EOF 00:43:21.215 )") 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file 
= 1 )) 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:21.215 "params": { 00:43:21.215 "name": "Nvme0", 00:43:21.215 "trtype": "tcp", 00:43:21.215 "traddr": "10.0.0.2", 00:43:21.215 "adrfam": "ipv4", 00:43:21.215 "trsvcid": "4420", 00:43:21.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:21.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:21.215 "hdgst": true, 00:43:21.215 "ddgst": true 00:43:21.215 }, 00:43:21.215 "method": "bdev_nvme_attach_controller" 00:43:21.215 }' 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:21.215 08:40:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.474 
filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:21.474 ... 00:43:21.474 fio-3.35 00:43:21.474 Starting 3 threads 00:43:33.701 00:43:33.701 filename0: (groupid=0, jobs=1): err= 0: pid=2349284: Wed Nov 20 08:40:36 2024 00:43:33.701 read: IOPS=258, BW=32.4MiB/s (33.9MB/s)(325MiB/10047msec) 00:43:33.701 slat (nsec): min=5844, max=35282, avg=6533.85, stdev=1092.73 00:43:33.701 clat (usec): min=7040, max=51200, avg=11558.97, stdev=1325.47 00:43:33.701 lat (usec): min=7046, max=51206, avg=11565.50, stdev=1325.49 00:43:33.701 clat percentiles (usec): 00:43:33.701 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:43:33.701 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:43:33.701 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:43:33.701 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13829], 99.95th=[48497], 00:43:33.701 | 99.99th=[51119] 00:43:33.701 bw ( KiB/s): min=32256, max=34560, per=39.01%, avg=33280.00, stdev=505.22, samples=20 00:43:33.701 iops : min= 252, max= 270, avg=260.00, stdev= 3.95, samples=20 00:43:33.701 lat (msec) : 10=2.88%, 20=97.04%, 50=0.04%, 100=0.04% 00:43:33.701 cpu : usr=95.88%, sys=3.89%, ctx=13, majf=0, minf=121 00:43:33.701 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.701 filename0: (groupid=0, jobs=1): err= 0: pid=2349285: Wed Nov 20 08:40:36 2024 00:43:33.701 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(252MiB/10003msec) 00:43:33.701 slat (nsec): min=5800, max=31881, avg=7562.44, stdev=1800.15 00:43:33.701 clat (usec): min=11170, max=54752, avg=14875.04, 
stdev=1836.95 00:43:33.701 lat (usec): min=11176, max=54759, avg=14882.60, stdev=1836.97 00:43:33.701 clat percentiles (usec): 00:43:33.701 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:43:33.701 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:43:33.701 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:43:33.701 | 99.00th=[17695], 99.50th=[18220], 99.90th=[53216], 99.95th=[53740], 00:43:33.701 | 99.99th=[54789] 00:43:33.701 bw ( KiB/s): min=23808, max=26164, per=30.23%, avg=25791.37, stdev=533.71, samples=19 00:43:33.701 iops : min= 186, max= 204, avg=201.47, stdev= 4.15, samples=19 00:43:33.701 lat (msec) : 20=99.85%, 100=0.15% 00:43:33.701 cpu : usr=95.37%, sys=4.39%, ctx=16, majf=0, minf=112 00:43:33.701 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.701 filename0: (groupid=0, jobs=1): err= 0: pid=2349286: Wed Nov 20 08:40:36 2024 00:43:33.701 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(260MiB/10004msec) 00:43:33.701 slat (nsec): min=5921, max=53270, avg=7908.05, stdev=2364.57 00:43:33.701 clat (usec): min=8697, max=18446, avg=14432.08, stdev=1046.70 00:43:33.701 lat (usec): min=8704, max=18453, avg=14439.99, stdev=1046.65 00:43:33.701 clat percentiles (usec): 00:43:33.701 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:43:33.701 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:43:33.701 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:43:33.701 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[18220], 00:43:33.701 | 99.99th=[18482] 00:43:33.701 bw ( KiB/s): min=26112, 
max=27392, per=31.16%, avg=26583.58, stdev=393.49, samples=19 00:43:33.701 iops : min= 204, max= 214, avg=207.68, stdev= 3.07, samples=19 00:43:33.701 lat (msec) : 10=0.24%, 20=99.76% 00:43:33.701 cpu : usr=90.60%, sys=7.15%, ctx=651, majf=0, minf=142 00:43:33.701 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.701 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.701 00:43:33.701 Run status group 0 (all jobs): 00:43:33.701 READ: bw=83.3MiB/s (87.4MB/s), 25.2MiB/s-32.4MiB/s (26.4MB/s-33.9MB/s), io=837MiB (878MB), run=10003-10047msec 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.701 
08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.701 00:43:33.701 real 0m11.342s 00:43:33.701 user 0m43.041s 00:43:33.701 sys 0m1.960s 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.701 08:40:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.701 ************************************ 00:43:33.702 END TEST fio_dif_digest 00:43:33.702 ************************************ 00:43:33.702 08:40:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:33.702 08:40:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:43:33.702 rmmod nvme_tcp 00:43:33.702 rmmod nvme_fabrics 00:43:33.702 rmmod nvme_keyring 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 2338939 ']' 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 2338939 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2338939 ']' 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2338939 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2338939 
00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2338939' 00:43:33.702 killing process with pid 2338939 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2338939 00:43:33.702 08:40:37 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2338939 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:43:33.702 08:40:37 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:37.013 Waiting for block devices as requested 00:43:37.013 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:37.013 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:43:37.274 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:37.274 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:37.535 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:37.535 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:37.535 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:37.535 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:37.795 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:37.795 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:38.056 08:40:42 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:43:38.057 08:40:42 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:43:38.057 08:40:42 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:43:38.057 08:40:42 
nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:38.057 08:40:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:38.057 08:40:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:43:39.968 08:40:44 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:43:40.229 08:40:44 nvmf_dif -- 
nvmf/setup.sh@41 -- # dev_map=() 00:43:40.229 08:40:44 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:43:40.229 08:40:44 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:43:40.229 08:40:44 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:43:40.229 08:40:44 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:43:40.229 00:43:40.229 real 1m19.803s 00:43:40.229 user 8m1.080s 00:43:40.229 sys 0m22.957s 00:43:40.229 08:40:44 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:40.229 08:40:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:40.229 ************************************ 00:43:40.229 END TEST nvmf_dif 00:43:40.229 ************************************ 00:43:40.229 08:40:44 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:40.229 08:40:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:40.229 08:40:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:40.229 08:40:44 -- common/autotest_common.sh@10 -- # set +x 00:43:40.229 ************************************ 00:43:40.229 START TEST nvmf_abort_qd_sizes 00:43:40.229 ************************************ 00:43:40.230 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:40.230 * Looking for test storage... 
00:43:40.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:40.230 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:40.230 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:43:40.230 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:40.491 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.492 --rc genhtml_branch_coverage=1 00:43:40.492 --rc genhtml_function_coverage=1 00:43:40.492 --rc genhtml_legend=1 00:43:40.492 --rc geninfo_all_blocks=1 00:43:40.492 --rc geninfo_unexecuted_blocks=1 00:43:40.492 00:43:40.492 ' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.492 --rc genhtml_branch_coverage=1 00:43:40.492 --rc genhtml_function_coverage=1 00:43:40.492 --rc genhtml_legend=1 00:43:40.492 --rc 
geninfo_all_blocks=1 00:43:40.492 --rc geninfo_unexecuted_blocks=1 00:43:40.492 00:43:40.492 ' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.492 --rc genhtml_branch_coverage=1 00:43:40.492 --rc genhtml_function_coverage=1 00:43:40.492 --rc genhtml_legend=1 00:43:40.492 --rc geninfo_all_blocks=1 00:43:40.492 --rc geninfo_unexecuted_blocks=1 00:43:40.492 00:43:40.492 ' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:40.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.492 --rc genhtml_branch_coverage=1 00:43:40.492 --rc genhtml_function_coverage=1 00:43:40.492 --rc genhtml_legend=1 00:43:40.492 --rc geninfo_all_blocks=1 00:43:40.492 --rc geninfo_unexecuted_blocks=1 00:43:40.492 00:43:40.492 ' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:40.492 08:40:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:40.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:43:40.492 08:40:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:43:40.493 08:40:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:48.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:43:48.631 08:40:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:48.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:48.631 Found net devices under 0000:31:00.0: cvl_0_0 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:48.631 Found net devices under 0000:31:00.1: cvl_0_1 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:43:48.631 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@247 -- # create_target_ns 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:43:48.632 10.0.0.1 00:43:48.632 
08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:43:48.632 10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- 
# get_ip_address initiator0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:43:48.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:48.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.579 ms 00:43:48.632 00:43:48.632 --- 10.0.0.1 ping statistics --- 00:43:48.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.632 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= 
count=1 00:43:48.632 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:43:48.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:48.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:43:48.633 00:43:48.633 --- 10.0.0.2 ping statistics --- 00:43:48.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.633 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:43:48.633 08:40:52 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:52.844 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:43:52.844 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:52.844 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address 
initiator1 00:43:52.844 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo 
cvl_0_1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=2359644 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 2359644 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2359644 ']' 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:52.845 08:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:52.845 [2024-11-20 08:40:57.522424] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:43:52.845 [2024-11-20 08:40:57.522473] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:53.135 [2024-11-20 08:40:57.605751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:53.135 [2024-11-20 08:40:57.642680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:53.135 [2024-11-20 08:40:57.642712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:53.135 [2024-11-20 08:40:57.642720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:53.135 [2024-11-20 08:40:57.642727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:53.135 [2024-11-20 08:40:57.642733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:53.135 [2024-11-20 08:40:57.644263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.135 [2024-11-20 08:40:57.644376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:53.135 [2024-11-20 08:40:57.644529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.135 [2024-11-20 08:40:57.644530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:53.761 08:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:53.761 ************************************ 00:43:53.761 START TEST spdk_target_abort 00:43:53.761 ************************************ 00:43:53.761 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:53.761 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:53.761 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:43:53.761 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.761 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.023 spdk_targetn1 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.023 [2024-11-20 08:40:58.735888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.023 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.284 [2024-11-20 08:40:58.784228] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:54.284 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:54.285 08:40:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:54.285 [2024-11-20 08:40:58.989358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:560 len:8 PRP1 0x200004abe000 PRP2 0x0 00:43:54.285 [2024-11-20 08:40:58.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0047 p:1 m:0 dnr:0 00:43:54.285 [2024-11-20 08:40:58.997312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:848 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:43:54.285 [2024-11-20 08:40:58.997328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:43:54.544 [2024-11-20 08:40:59.013323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1432 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:43:54.544 [2024-11-20 
08:40:59.013340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b5 p:1 m:0 dnr:0 00:43:54.544 [2024-11-20 08:40:59.021915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1760 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:43:54.544 [2024-11-20 08:40:59.021931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00dd p:1 m:0 dnr:0 00:43:54.544 [2024-11-20 08:40:59.037303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2240 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:43:54.545 [2024-11-20 08:40:59.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:54.545 [2024-11-20 08:40:59.045344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2520 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:43:54.545 [2024-11-20 08:40:59.045359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:54.545 [2024-11-20 08:40:59.062705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3136 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:43:54.545 [2024-11-20 08:40:59.062721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:43:54.545 [2024-11-20 08:40:59.069368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3344 len:8 PRP1 0x200004abe000 PRP2 0x0 00:43:54.545 [2024-11-20 08:40:59.069387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a4 p:0 m:0 dnr:0 00:43:57.867 Initializing NVMe Controllers 00:43:57.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 
00:43:57.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:57.867 Initialization complete. Launching workers. 00:43:57.867 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12660, failed: 8 00:43:57.867 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2682, failed to submit 9986 00:43:57.867 success 782, unsuccessful 1900, failed 0 00:43:57.867 08:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:57.867 08:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:57.867 [2024-11-20 08:41:02.208086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:640 len:8 PRP1 0x200004e58000 PRP2 0x0 00:43:57.867 [2024-11-20 08:41:02.208126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:43:57.867 [2024-11-20 08:41:02.257873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1752 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:43:57.867 [2024-11-20 08:41:02.257899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:43:57.867 [2024-11-20 08:41:02.266113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:1944 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:43:57.867 [2024-11-20 08:41:02.266138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:43:57.867 [2024-11-20 08:41:02.281979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 
lba:2312 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:43:57.867 [2024-11-20 08:41:02.282002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:57.868 [2024-11-20 08:41:02.289848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:2448 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:43:57.868 [2024-11-20 08:41:02.289876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:57.868 [2024-11-20 08:41:02.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:2616 len:8 PRP1 0x200004e46000 PRP2 0x0 00:43:57.868 [2024-11-20 08:41:02.297832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:57.868 [2024-11-20 08:41:02.313972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2912 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:43:57.868 [2024-11-20 08:41:02.313994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:57.868 [2024-11-20 08:41:02.330000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3328 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:43:57.868 [2024-11-20 08:41:02.330021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:43:59.253 [2024-11-20 08:41:03.799796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:37048 len:8 PRP1 0x200004e56000 PRP2 0x0 00:43:59.253 [2024-11-20 08:41:03.799833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:001c p:1 m:0 dnr:0 00:43:59.513 [2024-11-20 08:41:04.011139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:41624 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:43:59.513 [2024-11-20 08:41:04.011169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:44:00.453 [2024-11-20 08:41:05.154214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:66920 len:8 PRP1 0x200004e40000 PRP2 0x0 00:44:00.453 [2024-11-20 08:41:05.154252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00ae p:1 m:0 dnr:0 00:44:00.714 Initializing NVMe Controllers 00:44:00.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:00.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:00.714 Initialization complete. Launching workers. 00:44:00.714 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8443, failed: 11 00:44:00.714 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1197, failed to submit 7257 00:44:00.714 success 365, unsuccessful 832, failed 0 00:44:00.714 08:41:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:00.714 08:41:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:04.010 Initializing NVMe Controllers 00:44:04.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:04.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:04.010 Initialization complete. Launching workers. 
00:44:04.010 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41961, failed: 0 00:44:04.010 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2722, failed to submit 39239 00:44:04.010 success 579, unsuccessful 2143, failed 0 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.010 08:41:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2359644 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2359644 ']' 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2359644 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359644 00:44:05.923 08:41:10 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:05.923 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359644' 00:44:05.924 killing process with pid 2359644 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2359644 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2359644 00:44:05.924 00:44:05.924 real 0m12.120s 00:44:05.924 user 0m49.606s 00:44:05.924 sys 0m1.771s 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:05.924 ************************************ 00:44:05.924 END TEST spdk_target_abort 00:44:05.924 ************************************ 00:44:05.924 08:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:05.924 08:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:05.924 08:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:05.924 08:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:05.924 ************************************ 00:44:05.924 START TEST kernel_target_abort 00:44:05.924 ************************************ 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:44:05.924 08:41:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:44:05.924 08:41:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:44:05.924 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:44:06.185 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:06.185 08:41:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:10.390 Waiting for block devices as requested 00:44:10.390 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:10.390 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:44:10.650 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:44:10.650 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:10.650 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:10.912 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:10.912 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:10.912 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:11.172 0000:00:01.3 (8086 0b00): 
vfio-pci -> ioatdma 00:44:11.172 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:11.172 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:11.433 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:11.695 No valid GPT data, bailing 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:44:11.695 00:44:11.695 Discovery Log Number of Records 2, Generation counter 2 00:44:11.695 =====Discovery Log Entry 0====== 00:44:11.695 trtype: tcp 00:44:11.695 adrfam: ipv4 00:44:11.695 subtype: current discovery subsystem 00:44:11.695 treq: not specified, sq flow control disable supported 00:44:11.695 portid: 1 00:44:11.695 trsvcid: 4420 
00:44:11.695 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:11.695 traddr: 10.0.0.1 00:44:11.695 eflags: none 00:44:11.695 sectype: none 00:44:11.695 =====Discovery Log Entry 1====== 00:44:11.695 trtype: tcp 00:44:11.695 adrfam: ipv4 00:44:11.695 subtype: nvme subsystem 00:44:11.695 treq: not specified, sq flow control disable supported 00:44:11.695 portid: 1 00:44:11.695 trsvcid: 4420 00:44:11.695 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:11.695 traddr: 10.0.0.1 00:44:11.695 eflags: none 00:44:11.695 sectype: none 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
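Stripped of xtrace noise, the configure_kernel_target sequence traced above is a plain configfs recipe. This is a hedged sketch, not the script itself: it assumes root, loaded nvmet/nvmet_tcp modules, an unused /dev/nvme0n1, and the standard /sys/kernel/config/nvmet attribute layout.

```shell
# Sketch of the kernel nvmet target setup traced above (requires root;
# attribute paths follow the standard nvmet configfs layout).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

# Teardown (mirrors the clean_kernel_target trace later in this log):
# echo 0 > "$subsys/namespaces/1/enable"
# rm -f "$port/subsystems/"*; rmdir "$subsys/namespaces/1" "$port" "$subsys"
# modprobe -r nvmet_tcp nvmet
```

After the `ln -s`, the `nvme discover` output above shows the expected two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.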
adrfam traddr trsvcid subnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:11.695 08:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:14.996 Initializing NVMe Controllers 00:44:14.996 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:14.996 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:14.996 Initialization complete. Launching workers. 
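The -r argument handed to build/examples/abort is assembled field by field by the for-loop traced above. A minimal re-creation (variable names taken from the trace; the real loop lives in abort_qd_sizes.sh):

```shell
# Rebuild the transport-ID string the way the traced rabort loop does:
# append "name:value" for each field, space-separated, in a fixed order.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
target=
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # ${!r}: bash indirect expansion
done
echo "$target"
# → trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn
```

Each intermediate value of `$target` matches the successive `target='trtype:tcp adrfam:IPv4 …'` assignments visible in the trace.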
00:44:14.996 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67015, failed: 0 00:44:14.996 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67015, failed to submit 0 00:44:14.996 success 0, unsuccessful 67015, failed 0 00:44:14.996 08:41:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:14.996 08:41:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:18.297 Initializing NVMe Controllers 00:44:18.297 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:18.297 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:18.297 Initialization complete. Launching workers. 00:44:18.297 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107404, failed: 0 00:44:18.297 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27030, failed to submit 80374 00:44:18.297 success 0, unsuccessful 27030, failed 0 00:44:18.297 08:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:18.297 08:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:21.598 Initializing NVMe Controllers 00:44:21.598 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:21.598 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:21.598 Initialization complete. Launching workers. 
00:44:21.598 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100718, failed: 0 00:44:21.598 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25170, failed to submit 75548 00:44:21.598 success 0, unsuccessful 25170, failed 0 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:44:21.598 08:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:24.145 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 
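Each per-run summary above satisfies a simple bookkeeping invariant: I/O completed equals aborts submitted plus aborts that failed to submit (67015 = 67015 + 0, 107404 = 27030 + 80374, 100718 = 25170 + 75548). Checking the qd=64 run's numbers:

```shell
# Sanity-check the qd=64 summary above: every completed I/O is counted
# either under 'abort submitted' or under 'failed to submit'.
completed=100718 submitted=25170 failed_to_submit=75548
[ $((submitted + failed_to_submit)) -eq "$completed" ] && echo "invariant holds"
# → invariant holds
```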
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:44:24.145 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:44:24.405 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:44:24.405 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:44:24.405 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:44:24.405 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:44:26.320 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:44:26.320 00:44:26.320 real 0m20.416s 00:44:26.320 user 0m9.568s 00:44:26.320 sys 0m6.343s 00:44:26.320 08:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:26.320 08:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.320 ************************************ 00:44:26.320 END TEST kernel_target_abort 00:44:26.320 ************************************ 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:44:26.582 rmmod nvme_tcp 00:44:26.582 rmmod nvme_fabrics 00:44:26.582 rmmod nvme_keyring 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@105 
-- # modprobe -v -r nvme-fabrics 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 2359644 ']' 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 2359644 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2359644 ']' 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2359644 00:44:26.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2359644) - No such process 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2359644 is not found' 00:44:26.582 Process with pid 2359644 is not found 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:44:26.582 08:41:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:30.785 Waiting for block devices as requested 00:44:30.785 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:30.785 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:44:31.112 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:44:31.112 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:31.112 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:31.385 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:31.385 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:31.385 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:31.385 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:44:31.646 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:31.646 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:44:31.907 08:41:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:44:31.908 08:41:36 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:44:31.908 08:41:36 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:44:31.908 08:41:36 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:44:31.908 08:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:44:31.908 08:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:44:34.448 00:44:34.448 real 0m53.850s 00:44:34.448 user 1m5.147s 00:44:34.448 sys 0m20.169s 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:34.448 08:41:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:34.448 ************************************ 00:44:34.448 END TEST nvmf_abort_qd_sizes 00:44:34.448 ************************************ 00:44:34.448 08:41:38 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:34.448 08:41:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:34.448 08:41:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:34.448 08:41:38 -- common/autotest_common.sh@10 -- # set +x 00:44:34.448 ************************************ 00:44:34.448 START TEST keyring_file 00:44:34.448 ************************************ 00:44:34.448 
08:41:38 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:34.448 * Looking for test storage... 00:44:34.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:34.448 08:41:38 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:34.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:34.448 --rc genhtml_branch_coverage=1 00:44:34.448 --rc genhtml_function_coverage=1 00:44:34.448 --rc genhtml_legend=1 00:44:34.448 --rc geninfo_all_blocks=1 00:44:34.448 --rc geninfo_unexecuted_blocks=1 00:44:34.448 00:44:34.448 ' 00:44:34.448 08:41:38 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:34.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:34.448 --rc genhtml_branch_coverage=1 00:44:34.448 --rc genhtml_function_coverage=1 00:44:34.448 --rc genhtml_legend=1 00:44:34.448 --rc geninfo_all_blocks=1 00:44:34.448 --rc geninfo_unexecuted_blocks=1 00:44:34.448 00:44:34.448 ' 00:44:34.449 
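The cmp_versions trace above splits each version string on `.`, `-`, or `:` and compares components numerically, with missing components counting as zero. A condensed sketch of that comparison (hypothetical helper name; the traced script's own function is cmp_versions in scripts/common.sh):

```shell
# Element-wise numeric version comparison mirroring the traced
# cmp_versions logic: split on [.-:], missing components count as 0.
version_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1   # equal is not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is the `lt 1.15 2` check the trace performs to decide whether the installed lcov is older than 2.x and needs the branch/function coverage flags.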
08:41:38 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:34.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:34.449 --rc genhtml_branch_coverage=1 00:44:34.449 --rc genhtml_function_coverage=1 00:44:34.449 --rc genhtml_legend=1 00:44:34.449 --rc geninfo_all_blocks=1 00:44:34.449 --rc geninfo_unexecuted_blocks=1 00:44:34.449 00:44:34.449 ' 00:44:34.449 08:41:38 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:34.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:34.449 --rc genhtml_branch_coverage=1 00:44:34.449 --rc genhtml_function_coverage=1 00:44:34.449 --rc genhtml_legend=1 00:44:34.449 --rc geninfo_all_blocks=1 00:44:34.449 --rc geninfo_unexecuted_blocks=1 00:44:34.449 00:44:34.449 ' 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:34.449 08:41:38 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:34.449 08:41:38 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:34.449 08:41:38 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:34.449 08:41:38 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:34.449 08:41:38 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:34.449 08:41:38 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:34.449 08:41:38 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:34.449 08:41:38 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:34.449 08:41:38 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:44:34.449 08:41:38 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:34.449 08:41:38 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:34.449 08:41:38 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@50 -- # : 0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:34.449 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.L7QSeE8AnE 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.L7QSeE8AnE 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.L7QSeE8AnE 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.L7QSeE8AnE 00:44:34.449 08:41:38 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.22hZZa2dBK 00:44:34.449 08:41:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:44:34.449 08:41:38 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:34.449 08:41:39 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:44:34.449 08:41:39 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:34.449 08:41:39 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:34.449 08:41:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.22hZZa2dBK 00:44:34.449 08:41:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.22hZZa2dBK 00:44:34.449 08:41:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.22hZZa2dBK 
00:44:34.449 08:41:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=2370266 00:44:34.449 08:41:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2370266 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2370266 ']' 00:44:34.449 08:41:39 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:34.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:34.449 08:41:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:34.449 [2024-11-20 08:41:39.099493] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:44:34.449 [2024-11-20 08:41:39.099575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370266 ] 00:44:34.708 [2024-11-20 08:41:39.182810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:34.708 [2024-11-20 08:41:39.224332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:35.278 08:41:39 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.278 [2024-11-20 08:41:39.899645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:35.278 null0 00:44:35.278 [2024-11-20 08:41:39.931696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:35.278 [2024-11-20 08:41:39.932079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.278 08:41:39 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.278 [2024-11-20 08:41:39.963765] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:35.278 request: 00:44:35.278 { 00:44:35.278 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:35.278 "secure_channel": false, 00:44:35.278 "listen_address": { 00:44:35.278 "trtype": "tcp", 00:44:35.278 "traddr": "127.0.0.1", 00:44:35.278 "trsvcid": "4420" 00:44:35.278 }, 00:44:35.278 "method": "nvmf_subsystem_add_listener", 00:44:35.278 "req_id": 1 00:44:35.278 } 00:44:35.278 Got JSON-RPC error response 00:44:35.278 response: 00:44:35.278 { 00:44:35.278 "code": -32602, 00:44:35.278 "message": "Invalid parameters" 00:44:35.278 } 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:35.278 08:41:39 keyring_file -- keyring/file.sh@47 -- # bperfpid=2370366 00:44:35.278 08:41:39 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2370366 /var/tmp/bperf.sock 00:44:35.278 08:41:39 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:35.278 08:41:39 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2370366 ']' 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:35.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.278 08:41:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.538 [2024-11-20 08:41:40.022074] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 00:44:35.538 [2024-11-20 08:41:40.022125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370366 ] 00:44:35.538 [2024-11-20 08:41:40.117527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.538 [2024-11-20 08:41:40.154498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.109 08:41:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.109 08:41:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:36.109 08:41:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:36.109 08:41:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:36.370 08:41:40 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.22hZZa2dBK 00:44:36.370 08:41:40 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.22hZZa2dBK 00:44:36.630 08:41:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:36.630 08:41:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.630 08:41:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.L7QSeE8AnE == \/\t\m\p\/\t\m\p\.\L\7\Q\S\e\E\8\A\n\E ]] 00:44:36.630 08:41:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:36.630 08:41:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:36.630 08:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.892 08:41:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.22hZZa2dBK == \/\t\m\p\/\t\m\p\.\2\2\h\Z\Z\a\2\d\B\K ]] 00:44:36.892 08:41:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:36.892 08:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:36.892 08:41:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:36.892 08:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.892 08:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.892 08:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:37.154 08:41:41 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:37.154 08:41:41 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.154 08:41:41 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:37.154 08:41:41 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.154 08:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.415 [2024-11-20 08:41:42.004808] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:37.415 nvme0n1 00:44:37.415 08:41:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:37.415 08:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:37.415 08:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.415 08:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.415 08:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.415 08:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:37.677 08:41:42 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:37.677 08:41:42 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:37.677 08:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:37.677 08:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.677 08:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.677 08:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.677 08:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.938 08:41:42 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:37.938 08:41:42 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:37.938 Running I/O for 1 seconds... 00:44:38.881 14814.00 IOPS, 57.87 MiB/s 00:44:38.881 Latency(us) 00:44:38.881 [2024-11-20T07:41:43.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:38.881 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:38.881 nvme0n1 : 1.01 14854.64 58.03 0.00 0.00 8597.27 3604.48 12834.13 00:44:38.881 [2024-11-20T07:41:43.610Z] =================================================================================================================== 00:44:38.881 [2024-11-20T07:41:43.610Z] Total : 14854.64 58.03 0.00 0.00 8597.27 3604.48 12834.13 00:44:38.881 { 00:44:38.881 "results": [ 00:44:38.881 { 00:44:38.881 "job": "nvme0n1", 00:44:38.881 "core_mask": "0x2", 00:44:38.881 "workload": "randrw", 00:44:38.881 "percentage": 50, 00:44:38.881 "status": "finished", 00:44:38.881 "queue_depth": 128, 00:44:38.881 "io_size": 4096, 00:44:38.881 "runtime": 1.005881, 00:44:38.881 "iops": 14854.639862965898, 00:44:38.881 "mibps": 58.02593696471054, 
00:44:38.881 "io_failed": 0, 00:44:38.881 "io_timeout": 0, 00:44:38.881 "avg_latency_us": 8597.266971846697, 00:44:38.881 "min_latency_us": 3604.48, 00:44:38.881 "max_latency_us": 12834.133333333333 00:44:38.881 } 00:44:38.881 ], 00:44:38.881 "core_count": 1 00:44:38.881 } 00:44:38.881 08:41:43 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:38.881 08:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:39.143 08:41:43 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:39.143 08:41:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:39.143 08:41:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.143 08:41:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.143 08:41:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.143 08:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.404 08:41:43 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:39.404 08:41:43 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:39.404 08:41:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:39.404 08:41:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.404 08:41:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.404 08:41:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.404 08:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.404 08:41:44 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:39.404 08:41:44 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:39.404 08:41:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.404 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.665 [2024-11-20 08:41:44.255031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:39.665 [2024-11-20 08:41:44.255763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d9d0 (107): Transport endpoint is not connected 00:44:39.665 [2024-11-20 08:41:44.256758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d9d0 (9): Bad file descriptor 00:44:39.665 [2024-11-20 08:41:44.257760] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:39.665 [2024-11-20 08:41:44.257767] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:39.665 [2024-11-20 08:41:44.257773] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:39.665 [2024-11-20 08:41:44.257780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:39.665 request: 00:44:39.665 { 00:44:39.665 "name": "nvme0", 00:44:39.665 "trtype": "tcp", 00:44:39.665 "traddr": "127.0.0.1", 00:44:39.665 "adrfam": "ipv4", 00:44:39.665 "trsvcid": "4420", 00:44:39.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:39.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:39.665 "prchk_reftag": false, 00:44:39.665 "prchk_guard": false, 00:44:39.665 "hdgst": false, 00:44:39.665 "ddgst": false, 00:44:39.665 "psk": "key1", 00:44:39.665 "allow_unrecognized_csi": false, 00:44:39.665 "method": "bdev_nvme_attach_controller", 00:44:39.665 "req_id": 1 00:44:39.665 } 00:44:39.665 Got JSON-RPC error response 00:44:39.665 response: 00:44:39.665 { 00:44:39.665 "code": -5, 00:44:39.665 "message": "Input/output error" 00:44:39.665 } 00:44:39.665 08:41:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:39.665 08:41:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:39.665 08:41:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:39.665 08:41:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:39.665 08:41:44 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:39.665 08:41:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:39.665 08:41:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.665 08:41:44 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:39.665 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.665 08:41:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.927 08:41:44 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:39.927 08:41:44 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.927 08:41:44 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:39.927 08:41:44 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:39.927 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:40.187 08:41:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:40.187 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:40.449 08:41:44 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:40.449 08:41:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.449 08:41:44 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:40.449 08:41:45 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:40.449 08:41:45 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.L7QSeE8AnE 00:44:40.449 08:41:45 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.449 08:41:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.449 08:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.710 [2024-11-20 08:41:45.266376] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.L7QSeE8AnE': 0100660 00:44:40.710 [2024-11-20 08:41:45.266394] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:40.710 request: 00:44:40.710 { 00:44:40.710 "name": "key0", 00:44:40.710 "path": "/tmp/tmp.L7QSeE8AnE", 00:44:40.710 "method": "keyring_file_add_key", 00:44:40.710 "req_id": 1 00:44:40.710 } 00:44:40.710 Got JSON-RPC error response 00:44:40.710 response: 00:44:40.710 { 00:44:40.710 "code": -1, 00:44:40.710 "message": "Operation not permitted" 00:44:40.710 } 00:44:40.710 08:41:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:40.710 08:41:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:40.710 08:41:45 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:40.710 08:41:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:40.710 08:41:45 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.L7QSeE8AnE 00:44:40.710 08:41:45 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.710 08:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.L7QSeE8AnE 00:44:40.972 08:41:45 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.L7QSeE8AnE 00:44:40.972 08:41:45 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.972 08:41:45 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:40.972 08:41:45 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:40.972 08:41:45 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.972 08:41:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:40.972 08:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.233 [2024-11-20 08:41:45.791711] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.L7QSeE8AnE': No such file or directory 00:44:41.233 [2024-11-20 08:41:45.791724] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:41.233 [2024-11-20 08:41:45.791738] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:41.233 [2024-11-20 08:41:45.791744] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:41.233 [2024-11-20 08:41:45.791750] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:41.233 [2024-11-20 08:41:45.791755] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:41.233 request: 00:44:41.233 { 00:44:41.233 "name": "nvme0", 00:44:41.233 "trtype": "tcp", 00:44:41.233 "traddr": "127.0.0.1", 00:44:41.233 "adrfam": "ipv4", 00:44:41.233 "trsvcid": "4420", 00:44:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:41.233 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:41.233 "prchk_reftag": false, 00:44:41.233 "prchk_guard": false, 00:44:41.233 "hdgst": false, 00:44:41.233 "ddgst": false, 00:44:41.233 "psk": "key0", 00:44:41.233 "allow_unrecognized_csi": false, 00:44:41.233 "method": "bdev_nvme_attach_controller", 00:44:41.233 "req_id": 1 00:44:41.233 } 00:44:41.233 Got JSON-RPC error response 00:44:41.233 response: 00:44:41.233 { 00:44:41.233 "code": -19, 00:44:41.233 "message": "No such device" 00:44:41.233 } 00:44:41.233 08:41:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:41.233 08:41:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:41.233 08:41:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:41.233 08:41:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:41.233 08:41:45 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:41.233 08:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:41.495 08:41:45 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:41.495 08:41:45 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:41.495 08:41:45 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:44:41.495 08:41:45 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:41.495 08:41:45 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:44:41.495 08:41:45 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:41.495 08:41:45 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:41.495 08:41:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:46 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:46 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HmGsxkhQ6y 00:44:41.495 08:41:46 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.495 08:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.756 nvme0n1 00:44:41.756 08:41:46 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:41.756 08:41:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:41.756 08:41:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:41.756 08:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:41.756 08:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:41.756 08:41:46 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.017 08:41:46 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:42.017 08:41:46 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:42.017 08:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:42.277 08:41:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:42.277 08:41:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.277 08:41:46 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:42.277 08:41:46 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.277 08:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.538 08:41:47 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:42.538 08:41:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:42.538 08:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:44:42.800 08:41:47 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:42.800 08:41:47 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:42.800 08:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.800 08:41:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:42.800 08:41:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HmGsxkhQ6y 00:44:42.800 08:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HmGsxkhQ6y 00:44:43.062 08:41:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.22hZZa2dBK 00:44:43.062 08:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.22hZZa2dBK 00:44:43.322 08:41:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.322 08:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.582 nvme0n1 00:44:43.582 08:41:48 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:43.582 08:41:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:43.844 08:41:48 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:43.844 "subsystems": [ 00:44:43.844 { 00:44:43.844 "subsystem": 
"keyring", 00:44:43.844 "config": [ 00:44:43.844 { 00:44:43.844 "method": "keyring_file_add_key", 00:44:43.844 "params": { 00:44:43.844 "name": "key0", 00:44:43.844 "path": "/tmp/tmp.HmGsxkhQ6y" 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "keyring_file_add_key", 00:44:43.844 "params": { 00:44:43.844 "name": "key1", 00:44:43.844 "path": "/tmp/tmp.22hZZa2dBK" 00:44:43.844 } 00:44:43.844 } 00:44:43.844 ] 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "subsystem": "iobuf", 00:44:43.844 "config": [ 00:44:43.844 { 00:44:43.844 "method": "iobuf_set_options", 00:44:43.844 "params": { 00:44:43.844 "small_pool_count": 8192, 00:44:43.844 "large_pool_count": 1024, 00:44:43.844 "small_bufsize": 8192, 00:44:43.844 "large_bufsize": 135168, 00:44:43.844 "enable_numa": false 00:44:43.844 } 00:44:43.844 } 00:44:43.844 ] 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "subsystem": "sock", 00:44:43.844 "config": [ 00:44:43.844 { 00:44:43.844 "method": "sock_set_default_impl", 00:44:43.844 "params": { 00:44:43.844 "impl_name": "posix" 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "sock_impl_set_options", 00:44:43.844 "params": { 00:44:43.844 "impl_name": "ssl", 00:44:43.844 "recv_buf_size": 4096, 00:44:43.844 "send_buf_size": 4096, 00:44:43.844 "enable_recv_pipe": true, 00:44:43.844 "enable_quickack": false, 00:44:43.844 "enable_placement_id": 0, 00:44:43.844 "enable_zerocopy_send_server": true, 00:44:43.844 "enable_zerocopy_send_client": false, 00:44:43.844 "zerocopy_threshold": 0, 00:44:43.844 "tls_version": 0, 00:44:43.844 "enable_ktls": false 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "sock_impl_set_options", 00:44:43.844 "params": { 00:44:43.844 "impl_name": "posix", 00:44:43.844 "recv_buf_size": 2097152, 00:44:43.844 "send_buf_size": 2097152, 00:44:43.844 "enable_recv_pipe": true, 00:44:43.844 "enable_quickack": false, 00:44:43.844 "enable_placement_id": 0, 00:44:43.844 "enable_zerocopy_send_server": true, 
00:44:43.844 "enable_zerocopy_send_client": false, 00:44:43.844 "zerocopy_threshold": 0, 00:44:43.844 "tls_version": 0, 00:44:43.844 "enable_ktls": false 00:44:43.844 } 00:44:43.844 } 00:44:43.844 ] 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "subsystem": "vmd", 00:44:43.844 "config": [] 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "subsystem": "accel", 00:44:43.844 "config": [ 00:44:43.844 { 00:44:43.844 "method": "accel_set_options", 00:44:43.844 "params": { 00:44:43.844 "small_cache_size": 128, 00:44:43.844 "large_cache_size": 16, 00:44:43.844 "task_count": 2048, 00:44:43.844 "sequence_count": 2048, 00:44:43.844 "buf_count": 2048 00:44:43.844 } 00:44:43.844 } 00:44:43.844 ] 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "subsystem": "bdev", 00:44:43.844 "config": [ 00:44:43.844 { 00:44:43.844 "method": "bdev_set_options", 00:44:43.844 "params": { 00:44:43.844 "bdev_io_pool_size": 65535, 00:44:43.844 "bdev_io_cache_size": 256, 00:44:43.844 "bdev_auto_examine": true, 00:44:43.844 "iobuf_small_cache_size": 128, 00:44:43.844 "iobuf_large_cache_size": 16 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "bdev_raid_set_options", 00:44:43.844 "params": { 00:44:43.844 "process_window_size_kb": 1024, 00:44:43.844 "process_max_bandwidth_mb_sec": 0 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "bdev_iscsi_set_options", 00:44:43.844 "params": { 00:44:43.844 "timeout_sec": 30 00:44:43.844 } 00:44:43.844 }, 00:44:43.844 { 00:44:43.844 "method": "bdev_nvme_set_options", 00:44:43.844 "params": { 00:44:43.844 "action_on_timeout": "none", 00:44:43.844 "timeout_us": 0, 00:44:43.844 "timeout_admin_us": 0, 00:44:43.844 "keep_alive_timeout_ms": 10000, 00:44:43.844 "arbitration_burst": 0, 00:44:43.844 "low_priority_weight": 0, 00:44:43.844 "medium_priority_weight": 0, 00:44:43.844 "high_priority_weight": 0, 00:44:43.844 "nvme_adminq_poll_period_us": 10000, 00:44:43.844 "nvme_ioq_poll_period_us": 0, 00:44:43.844 "io_queue_requests": 512, 
00:44:43.844 "delay_cmd_submit": true, 00:44:43.844 "transport_retry_count": 4, 00:44:43.844 "bdev_retry_count": 3, 00:44:43.844 "transport_ack_timeout": 0, 00:44:43.844 "ctrlr_loss_timeout_sec": 0, 00:44:43.844 "reconnect_delay_sec": 0, 00:44:43.844 "fast_io_fail_timeout_sec": 0, 00:44:43.844 "disable_auto_failback": false, 00:44:43.844 "generate_uuids": false, 00:44:43.844 "transport_tos": 0, 00:44:43.845 "nvme_error_stat": false, 00:44:43.845 "rdma_srq_size": 0, 00:44:43.845 "io_path_stat": false, 00:44:43.845 "allow_accel_sequence": false, 00:44:43.845 "rdma_max_cq_size": 0, 00:44:43.845 "rdma_cm_event_timeout_ms": 0, 00:44:43.845 "dhchap_digests": [ 00:44:43.845 "sha256", 00:44:43.845 "sha384", 00:44:43.845 "sha512" 00:44:43.845 ], 00:44:43.845 "dhchap_dhgroups": [ 00:44:43.845 "null", 00:44:43.845 "ffdhe2048", 00:44:43.845 "ffdhe3072", 00:44:43.845 "ffdhe4096", 00:44:43.845 "ffdhe6144", 00:44:43.845 "ffdhe8192" 00:44:43.845 ] 00:44:43.845 } 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "method": "bdev_nvme_attach_controller", 00:44:43.845 "params": { 00:44:43.845 "name": "nvme0", 00:44:43.845 "trtype": "TCP", 00:44:43.845 "adrfam": "IPv4", 00:44:43.845 "traddr": "127.0.0.1", 00:44:43.845 "trsvcid": "4420", 00:44:43.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.845 "prchk_reftag": false, 00:44:43.845 "prchk_guard": false, 00:44:43.845 "ctrlr_loss_timeout_sec": 0, 00:44:43.845 "reconnect_delay_sec": 0, 00:44:43.845 "fast_io_fail_timeout_sec": 0, 00:44:43.845 "psk": "key0", 00:44:43.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:43.845 "hdgst": false, 00:44:43.845 "ddgst": false, 00:44:43.845 "multipath": "multipath" 00:44:43.845 } 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "method": "bdev_nvme_set_hotplug", 00:44:43.845 "params": { 00:44:43.845 "period_us": 100000, 00:44:43.845 "enable": false 00:44:43.845 } 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "method": "bdev_wait_for_examine" 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }, 00:44:43.845 { 
00:44:43.845 "subsystem": "nbd", 00:44:43.845 "config": [] 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }' 00:44:43.845 08:41:48 keyring_file -- keyring/file.sh@115 -- # killprocess 2370366 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2370366 ']' 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2370366 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370366 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370366' 00:44:43.845 killing process with pid 2370366 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@973 -- # kill 2370366 00:44:43.845 Received shutdown signal, test time was about 1.000000 seconds 00:44:43.845 00:44:43.845 Latency(us) 00:44:43.845 [2024-11-20T07:41:48.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:43.845 [2024-11-20T07:41:48.574Z] =================================================================================================================== 00:44:43.845 [2024-11-20T07:41:48.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@978 -- # wait 2370366 00:44:43.845 08:41:48 keyring_file -- keyring/file.sh@118 -- # bperfpid=2372093 00:44:43.845 08:41:48 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2372093 /var/tmp/bperf.sock 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2372093 ']' 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:43.845 08:41:48 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:43.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:43.845 08:41:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:43.845 08:41:48 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:43.845 "subsystems": [ 00:44:43.845 { 00:44:43.845 "subsystem": "keyring", 00:44:43.845 "config": [ 00:44:43.845 { 00:44:43.845 "method": "keyring_file_add_key", 00:44:43.845 "params": { 00:44:43.845 "name": "key0", 00:44:43.845 "path": "/tmp/tmp.HmGsxkhQ6y" 00:44:43.845 } 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "method": "keyring_file_add_key", 00:44:43.845 "params": { 00:44:43.845 "name": "key1", 00:44:43.845 "path": "/tmp/tmp.22hZZa2dBK" 00:44:43.845 } 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "subsystem": "iobuf", 00:44:43.845 "config": [ 00:44:43.845 { 00:44:43.845 "method": "iobuf_set_options", 00:44:43.845 "params": { 00:44:43.845 "small_pool_count": 8192, 00:44:43.845 "large_pool_count": 1024, 00:44:43.845 "small_bufsize": 8192, 00:44:43.845 "large_bufsize": 135168, 00:44:43.845 "enable_numa": false 00:44:43.845 } 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "subsystem": "sock", 00:44:43.845 "config": [ 00:44:43.845 { 00:44:43.845 "method": "sock_set_default_impl", 00:44:43.845 "params": { 00:44:43.845 "impl_name": "posix" 00:44:43.845 } 00:44:43.845 }, 
00:44:43.845 { 00:44:43.845 "method": "sock_impl_set_options", 00:44:43.845 "params": { 00:44:43.845 "impl_name": "ssl", 00:44:43.845 "recv_buf_size": 4096, 00:44:43.845 "send_buf_size": 4096, 00:44:43.845 "enable_recv_pipe": true, 00:44:43.845 "enable_quickack": false, 00:44:43.845 "enable_placement_id": 0, 00:44:43.845 "enable_zerocopy_send_server": true, 00:44:43.845 "enable_zerocopy_send_client": false, 00:44:43.845 "zerocopy_threshold": 0, 00:44:43.845 "tls_version": 0, 00:44:43.845 "enable_ktls": false 00:44:43.845 } 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "method": "sock_impl_set_options", 00:44:43.845 "params": { 00:44:43.845 "impl_name": "posix", 00:44:43.845 "recv_buf_size": 2097152, 00:44:43.845 "send_buf_size": 2097152, 00:44:43.845 "enable_recv_pipe": true, 00:44:43.845 "enable_quickack": false, 00:44:43.845 "enable_placement_id": 0, 00:44:43.845 "enable_zerocopy_send_server": true, 00:44:43.845 "enable_zerocopy_send_client": false, 00:44:43.845 "zerocopy_threshold": 0, 00:44:43.845 "tls_version": 0, 00:44:43.845 "enable_ktls": false 00:44:43.845 } 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "subsystem": "vmd", 00:44:43.845 "config": [] 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "subsystem": "accel", 00:44:43.845 "config": [ 00:44:43.845 { 00:44:43.845 "method": "accel_set_options", 00:44:43.845 "params": { 00:44:43.845 "small_cache_size": 128, 00:44:43.845 "large_cache_size": 16, 00:44:43.845 "task_count": 2048, 00:44:43.845 "sequence_count": 2048, 00:44:43.845 "buf_count": 2048 00:44:43.845 } 00:44:43.845 } 00:44:43.845 ] 00:44:43.845 }, 00:44:43.845 { 00:44:43.845 "subsystem": "bdev", 00:44:43.845 "config": [ 00:44:43.845 { 00:44:43.845 "method": "bdev_set_options", 00:44:43.845 "params": { 00:44:43.845 "bdev_io_pool_size": 65535, 00:44:43.845 "bdev_io_cache_size": 256, 00:44:43.845 "bdev_auto_examine": true, 00:44:43.845 "iobuf_small_cache_size": 128, 00:44:43.846 "iobuf_large_cache_size": 16 00:44:43.846 } 
00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_raid_set_options", 00:44:43.846 "params": { 00:44:43.846 "process_window_size_kb": 1024, 00:44:43.846 "process_max_bandwidth_mb_sec": 0 00:44:43.846 } 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_iscsi_set_options", 00:44:43.846 "params": { 00:44:43.846 "timeout_sec": 30 00:44:43.846 } 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_nvme_set_options", 00:44:43.846 "params": { 00:44:43.846 "action_on_timeout": "none", 00:44:43.846 "timeout_us": 0, 00:44:43.846 "timeout_admin_us": 0, 00:44:43.846 "keep_alive_timeout_ms": 10000, 00:44:43.846 "arbitration_burst": 0, 00:44:43.846 "low_priority_weight": 0, 00:44:43.846 "medium_priority_weight": 0, 00:44:43.846 "high_priority_weight": 0, 00:44:43.846 "nvme_adminq_poll_period_us": 10000, 00:44:43.846 "nvme_ioq_poll_period_us": 0, 00:44:43.846 "io_queue_requests": 512, 00:44:43.846 "delay_cmd_submit": true, 00:44:43.846 "transport_retry_count": 4, 00:44:43.846 "bdev_retry_count": 3, 00:44:43.846 "transport_ack_timeout": 0, 00:44:43.846 "ctrlr_loss_timeout_sec": 0, 00:44:43.846 "reconnect_delay_sec": 0, 00:44:43.846 "fast_io_fail_timeout_sec": 0, 00:44:43.846 "disable_auto_failback": false, 00:44:43.846 "generate_uuids": false, 00:44:43.846 "transport_tos": 0, 00:44:43.846 "nvme_error_stat": false, 00:44:43.846 "rdma_srq_size": 0, 00:44:43.846 "io_path_stat": false, 00:44:43.846 "allow_accel_sequence": false, 00:44:43.846 "rdma_max_cq_size": 0, 00:44:43.846 "rdma_cm_event_timeout_ms": 0, 00:44:43.846 "dhchap_digests": [ 00:44:43.846 "sha256", 00:44:43.846 "sha384", 00:44:43.846 "sha512" 00:44:43.846 ], 00:44:43.846 "dhchap_dhgroups": [ 00:44:43.846 "null", 00:44:43.846 "ffdhe2048", 00:44:43.846 "ffdhe3072", 00:44:43.846 "ffdhe4096", 00:44:43.846 "ffdhe6144", 00:44:43.846 "ffdhe8192" 00:44:43.846 ] 00:44:43.846 } 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_nvme_attach_controller", 00:44:43.846 "params": { 00:44:43.846 
"name": "nvme0", 00:44:43.846 "trtype": "TCP", 00:44:43.846 "adrfam": "IPv4", 00:44:43.846 "traddr": "127.0.0.1", 00:44:43.846 "trsvcid": "4420", 00:44:43.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.846 "prchk_reftag": false, 00:44:43.846 "prchk_guard": false, 00:44:43.846 "ctrlr_loss_timeout_sec": 0, 00:44:43.846 "reconnect_delay_sec": 0, 00:44:43.846 "fast_io_fail_timeout_sec": 0, 00:44:43.846 "psk": "key0", 00:44:43.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:43.846 "hdgst": false, 00:44:43.846 "ddgst": false, 00:44:43.846 "multipath": "multipath" 00:44:43.846 } 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_nvme_set_hotplug", 00:44:43.846 "params": { 00:44:43.846 "period_us": 100000, 00:44:43.846 "enable": false 00:44:43.846 } 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "method": "bdev_wait_for_examine" 00:44:43.846 } 00:44:43.846 ] 00:44:43.846 }, 00:44:43.846 { 00:44:43.846 "subsystem": "nbd", 00:44:43.846 "config": [] 00:44:43.846 } 00:44:43.846 ] 00:44:43.846 }' 00:44:43.846 [2024-11-20 08:41:48.544939] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:44:43.846 [2024-11-20 08:41:48.544993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372093 ] 00:44:44.108 [2024-11-20 08:41:48.635274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:44.108 [2024-11-20 08:41:48.664042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:44.108 [2024-11-20 08:41:48.807101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:44.680 08:41:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:44.680 08:41:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:44.680 08:41:49 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:44.680 08:41:49 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:44.680 08:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:44.941 08:41:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:44.941 08:41:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:44.941 08:41:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:44.941 08:41:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:44.941 08:41:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:44.941 08:41:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:44.941 08:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.201 08:41:49 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:45.201 08:41:49 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:45.201 08:41:49 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:45.201 08:41:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.201 08:41:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.201 08:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.201 08:41:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:45.201 08:41:49 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:45.201 08:41:49 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:45.201 08:41:49 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:45.201 08:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:45.462 08:41:50 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:45.462 08:41:50 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:45.462 08:41:50 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HmGsxkhQ6y /tmp/tmp.22hZZa2dBK 00:44:45.462 08:41:50 keyring_file -- keyring/file.sh@20 -- # killprocess 2372093 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2372093 ']' 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2372093 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372093 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2372093' 00:44:45.462 killing process with pid 2372093 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@973 -- # kill 2372093 00:44:45.462 Received shutdown signal, test time was about 1.000000 seconds 00:44:45.462 00:44:45.462 Latency(us) 00:44:45.462 [2024-11-20T07:41:50.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.462 [2024-11-20T07:41:50.191Z] =================================================================================================================== 00:44:45.462 [2024-11-20T07:41:50.191Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:45.462 08:41:50 keyring_file -- common/autotest_common.sh@978 -- # wait 2372093 00:44:45.723 08:41:50 keyring_file -- keyring/file.sh@21 -- # killprocess 2370266 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2370266 ']' 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2370266 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370266 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370266' 00:44:45.723 killing process with pid 2370266 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@973 -- # kill 2370266 00:44:45.723 08:41:50 keyring_file -- common/autotest_common.sh@978 -- # wait 2370266 00:44:45.984 00:44:45.984 real 0m11.750s 00:44:45.984 user 0m28.421s 00:44:45.984 sys 0m2.584s 00:44:45.984 08:41:50 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
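Editor's note: the `killprocess` helper seen repeatedly above (`autotest_common.sh@954`–`@978`) guards every kill with a `kill -0` liveness probe and a `ps -o comm=` name check (refusing to signal `sudo`). A minimal sketch of that guard pattern, using the current shell's own PID so it is self-contained; the echo message is illustrative, not from the helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard pattern from autotest_common.sh:
# probe the pid with kill -0 (signal 0 = existence check only), then
# inspect its command name before deciding to send a real signal.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps -o comm= -p "$pid")   # '=' suppresses the header line
  if [ "$name" != "sudo" ]; then
    echo "would signal pid $pid ($name)"
  fi
fi
```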
00:44:45.984 08:41:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:45.984 ************************************ 00:44:45.984 END TEST keyring_file 00:44:45.984 ************************************ 00:44:45.984 08:41:50 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:45.984 08:41:50 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:45.984 08:41:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:45.984 08:41:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:45.984 08:41:50 -- common/autotest_common.sh@10 -- # set +x 00:44:45.984 ************************************ 00:44:45.984 START TEST keyring_linux 00:44:45.984 ************************************ 00:44:45.984 08:41:50 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:45.984 Joined session keyring: 750575191 00:44:45.984 * Looking for test storage... 
00:44:45.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:45.984 08:41:50 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:45.984 08:41:50 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:44:45.984 08:41:50 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:45.984 08:41:50 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:45.984 08:41:50 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:46.246 08:41:50 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:46.246 08:41:50 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.246 --rc genhtml_branch_coverage=1 00:44:46.246 --rc genhtml_function_coverage=1 00:44:46.246 --rc genhtml_legend=1 00:44:46.246 --rc geninfo_all_blocks=1 00:44:46.246 --rc geninfo_unexecuted_blocks=1 00:44:46.246 00:44:46.246 ' 00:44:46.246 08:41:50 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.246 --rc genhtml_branch_coverage=1 00:44:46.246 --rc genhtml_function_coverage=1 00:44:46.246 --rc genhtml_legend=1 00:44:46.246 --rc geninfo_all_blocks=1 00:44:46.246 --rc geninfo_unexecuted_blocks=1 00:44:46.246 00:44:46.246 ' 
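Editor's note: the `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and compares component-wise, padding the shorter list with zeros. A condensed sketch of that comparison (simplified to numeric dot-separated versions; the real `scripts/common.sh` also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare behind "lt 1.15 2".
# Returns 0 (true) when $1 is strictly less than $2.
lt() {
  local IFS=.
  local -a a=($1) b=($2)   # IFS=. splits "1.15" into (1 15)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"   # 1 < 2 decides at the first component
```

This is why the trace above reports `lt=0` before selecting the newer lcov option set.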
00:44:46.246 08:41:50 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.246 --rc genhtml_branch_coverage=1 00:44:46.246 --rc genhtml_function_coverage=1 00:44:46.246 --rc genhtml_legend=1 00:44:46.246 --rc geninfo_all_blocks=1 00:44:46.246 --rc geninfo_unexecuted_blocks=1 00:44:46.246 00:44:46.246 ' 00:44:46.246 08:41:50 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:46.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.246 --rc genhtml_branch_coverage=1 00:44:46.246 --rc genhtml_function_coverage=1 00:44:46.246 --rc genhtml_legend=1 00:44:46.246 --rc geninfo_all_blocks=1 00:44:46.246 --rc geninfo_unexecuted_blocks=1 00:44:46.246 00:44:46.246 ' 00:44:46.246 08:41:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:46.246 08:41:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:46.246 08:41:50 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:46.246 08:41:50 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:46.246 08:41:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.246 08:41:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.246 08:41:50 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.246 08:41:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:46.246 08:41:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.246 08:41:50 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:44:46.246 08:41:50 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:46.247 08:41:50 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:46.247 08:41:50 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:46.247 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:46.247 /tmp/:spdk-test:key0 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:46.247 08:41:50 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:46.247 08:41:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:46.247 /tmp/:spdk-test:key1 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2372587 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2372587 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2372587 ']' 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:46.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:46.247 08:41:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:46.247 08:41:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:46.247 [2024-11-20 08:41:50.882531] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:44:46.247 [2024-11-20 08:41:50.882594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372587 ] 00:44:46.247 [2024-11-20 08:41:50.961249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:46.508 [2024-11-20 08:41:50.999473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:47.080 08:41:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.080 [2024-11-20 08:41:51.649676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:47.080 null0 00:44:47.080 [2024-11-20 08:41:51.681729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:47.080 [2024-11-20 08:41:51.682134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.080 08:41:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:47.080 4123214 00:44:47.080 08:41:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:47.080 95263856 00:44:47.080 08:41:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2372862 00:44:47.080 08:41:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2372862 /var/tmp/bperf.sock 00:44:47.080 08:41:51 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2372862 ']' 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.080 08:41:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.080 [2024-11-20 08:41:51.759090] Starting SPDK v25.01-pre git sha1 c788bae60 / DPDK 24.03.0 initialization... 
00:44:47.080 [2024-11-20 08:41:51.759132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372862 ] 00:44:47.341 [2024-11-20 08:41:51.813577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.341 [2024-11-20 08:41:51.843323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:47.341 08:41:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:47.341 08:41:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:47.341 08:41:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:47.341 08:41:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:47.341 08:41:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:47.341 08:41:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:47.601 08:41:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:47.601 08:41:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:47.862 [2024-11-20 08:41:52.410000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:47.862 nvme0n1 00:44:47.862 08:41:52 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:47.862 08:41:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:47.862 08:41:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:47.862 08:41:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:47.862 08:41:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:47.862 08:41:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:48.123 08:41:52 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:48.123 08:41:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.123 08:41:52 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@25 -- # sn=4123214 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@26 -- # [[ 4123214 == \4\1\2\3\2\1\4 ]] 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 4123214 00:44:48.123 08:41:52 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:48.123 08:41:52 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:48.384 Running I/O for 1 seconds... 00:44:49.326 16177.00 IOPS, 63.19 MiB/s 00:44:49.326 Latency(us) 00:44:49.326 [2024-11-20T07:41:54.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:49.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:49.326 nvme0n1 : 1.01 16179.01 63.20 0.00 0.00 7878.18 6908.59 15073.28 00:44:49.326 [2024-11-20T07:41:54.055Z] =================================================================================================================== 00:44:49.326 [2024-11-20T07:41:54.055Z] Total : 16179.01 63.20 0.00 0.00 7878.18 6908.59 15073.28 00:44:49.326 { 00:44:49.326 "results": [ 00:44:49.326 { 00:44:49.326 "job": "nvme0n1", 00:44:49.326 "core_mask": "0x2", 00:44:49.326 "workload": "randread", 00:44:49.326 "status": "finished", 00:44:49.326 "queue_depth": 128, 00:44:49.326 "io_size": 4096, 00:44:49.326 "runtime": 1.007787, 00:44:49.326 "iops": 16179.014017843056, 00:44:49.326 "mibps": 63.199273507199436, 00:44:49.326 "io_failed": 0, 00:44:49.326 "io_timeout": 0, 00:44:49.326 "avg_latency_us": 7878.178489215987, 00:44:49.326 "min_latency_us": 6908.586666666667, 00:44:49.326 "max_latency_us": 15073.28 00:44:49.326 } 00:44:49.326 ], 00:44:49.326 "core_count": 1 00:44:49.326 } 00:44:49.326 08:41:53 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:49.326 08:41:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:49.587 08:41:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:49.587 08:41:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:49.587 08:41:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:49.587 08:41:54 keyring_linux -- 
keyring/linux.sh@22 -- # jq length 00:44:49.587 08:41:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:49.587 08:41:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:49.849 08:41:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:49.849 [2024-11-20 08:41:54.470146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:49.849 [2024-11-20 08:41:54.470944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4e270 (107): Transport endpoint is not connected 00:44:49.849 [2024-11-20 08:41:54.471941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4e270 (9): Bad file descriptor 00:44:49.849 [2024-11-20 08:41:54.472942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:49.849 [2024-11-20 08:41:54.472950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:49.849 [2024-11-20 08:41:54.472956] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:49.849 [2024-11-20 08:41:54.472963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:49.849 request: 00:44:49.849 { 00:44:49.849 "name": "nvme0", 00:44:49.849 "trtype": "tcp", 00:44:49.849 "traddr": "127.0.0.1", 00:44:49.849 "adrfam": "ipv4", 00:44:49.849 "trsvcid": "4420", 00:44:49.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:49.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:49.849 "prchk_reftag": false, 00:44:49.849 "prchk_guard": false, 00:44:49.849 "hdgst": false, 00:44:49.849 "ddgst": false, 00:44:49.849 "psk": ":spdk-test:key1", 00:44:49.849 "allow_unrecognized_csi": false, 00:44:49.849 "method": "bdev_nvme_attach_controller", 00:44:49.849 "req_id": 1 00:44:49.849 } 00:44:49.849 Got JSON-RPC error response 00:44:49.849 response: 00:44:49.849 { 00:44:49.849 "code": -5, 00:44:49.849 "message": "Input/output error" 00:44:49.849 } 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@33 -- # sn=4123214 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 4123214 00:44:49.849 1 links removed 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:49.849 
08:41:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@33 -- # sn=95263856 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 95263856 00:44:49.849 1 links removed 00:44:49.849 08:41:54 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2372862 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2372862 ']' 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2372862 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:49.849 08:41:54 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372862 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372862' 00:44:50.111 killing process with pid 2372862 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@973 -- # kill 2372862 00:44:50.111 Received shutdown signal, test time was about 1.000000 seconds 00:44:50.111 00:44:50.111 Latency(us) 00:44:50.111 [2024-11-20T07:41:54.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.111 [2024-11-20T07:41:54.840Z] =================================================================================================================== 00:44:50.111 [2024-11-20T07:41:54.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@978 -- # wait 2372862 
00:44:50.111 08:41:54 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2372587 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2372587 ']' 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2372587 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372587 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372587' 00:44:50.111 killing process with pid 2372587 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@973 -- # kill 2372587 00:44:50.111 08:41:54 keyring_linux -- common/autotest_common.sh@978 -- # wait 2372587 00:44:50.372 00:44:50.372 real 0m4.396s 00:44:50.372 user 0m7.994s 00:44:50.372 sys 0m1.303s 00:44:50.372 08:41:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:50.372 08:41:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:50.372 ************************************ 00:44:50.372 END TEST keyring_linux 00:44:50.372 ************************************ 00:44:50.372 08:41:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:50.372 08:41:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:50.372 08:41:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:50.372 08:41:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:50.372 08:41:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:50.372 08:41:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:50.372 08:41:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:50.372 08:41:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:50.372 08:41:54 -- common/autotest_common.sh@10 -- # set +x 00:44:50.372 08:41:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:50.372 08:41:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:50.372 08:41:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:50.372 08:41:54 -- common/autotest_common.sh@10 -- # set +x 00:44:58.533 INFO: APP EXITING 00:44:58.533 INFO: killing all VMs 00:44:58.533 INFO: killing vhost app 00:44:58.533 INFO: EXIT DONE 00:45:01.838 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:65:00.0 (144d a80a): Already using the nvme driver 00:45:01.838 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:45:01.838 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:45:01.838 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:45:06.060 Cleaning 00:45:06.060 Removing: /var/run/dpdk/spdk0/config 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:06.060 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:06.060 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:06.060 Removing: /var/run/dpdk/spdk1/config 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:06.060 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:06.060 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:06.060 Removing: /var/run/dpdk/spdk2/config 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:06.060 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:06.060 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:06.060 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:06.060 Removing: /var/run/dpdk/spdk3/config 00:45:06.060 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:06.060 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:06.060 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:06.321 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:06.321 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:06.321 Removing: /var/run/dpdk/spdk4/config 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:06.321 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:06.321 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:45:06.321 Removing: /dev/shm/bdev_svc_trace.1 00:45:06.321 Removing: /dev/shm/nvmf_trace.0 00:45:06.321 Removing: /dev/shm/spdk_tgt_trace.pid1748828 00:45:06.321 Removing: /var/run/dpdk/spdk0 00:45:06.321 Removing: /var/run/dpdk/spdk1 00:45:06.321 Removing: /var/run/dpdk/spdk2 00:45:06.321 Removing: /var/run/dpdk/spdk3 00:45:06.321 Removing: /var/run/dpdk/spdk4 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1747141 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1748828 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1749463 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1750680 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1750959 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1752546 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1752805 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1753154 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1754147 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1754869 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1755264 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1755663 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1756076 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1756479 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1756651 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1756867 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1757252 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1758312 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1761604 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1761969 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1762319 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1762606 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1763025 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1763032 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1763501 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1763739 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1764099 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1764135 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1764478 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1764611 00:45:06.321 Removing: 
/var/run/dpdk/spdk_pid1765257 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1765412 00:45:06.321 Removing: /var/run/dpdk/spdk_pid1765715 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1770948 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1776822 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1789103 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1789909 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1795876 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1796236 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1802120 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1810097 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1813349 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1827092 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1850568 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1860786 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1863245 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1864429 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1871047 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1931172 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1938100 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1945837 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1954164 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1954196 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1955206 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1956243 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1957301 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1957931 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1958087 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1958318 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1958520 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1958525 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1959531 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1960531 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1961545 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1962211 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1962220 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1962554 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1964007 
00:45:06.583 Removing: /var/run/dpdk/spdk_pid1965506 00:45:06.583 Removing: /var/run/dpdk/spdk_pid1976521 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2013169 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2018977 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2020972 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2023091 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2023320 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2023354 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2023669 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2024226 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2026404 00:45:06.583 Removing: /var/run/dpdk/spdk_pid2027487 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2027967 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2030584 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2031288 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2032172 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2037771 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2044871 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2044873 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2044874 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2050264 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2062130 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2066975 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2074900 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2076421 00:45:06.584 Removing: /var/run/dpdk/spdk_pid2078272 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2079834 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2086002 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2091913 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2102135 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2102195 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2107723 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2107957 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2108291 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2108721 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2108815 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2115500 00:45:06.846 Removing: 
/var/run/dpdk/spdk_pid2116110 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2121989 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2125336 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2132490 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2143073 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2152556 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2152558 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2178842 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2179140 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2186880 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2187264 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2194070 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2194860 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2195680 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2196425 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2197447 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2198166 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2198850 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2199536 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2205281 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2212076 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2219670 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2225695 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2231300 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2243209 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2243920 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2249609 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2249964 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2255394 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2262809 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2265889 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2279681 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2301627 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2311479 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2313356 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2314498 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2320634 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2323902 
00:45:06.846 Removing: /var/run/dpdk/spdk_pid2332473 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2332484 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2339188 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2341434 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2343902 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2345091 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2347609 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2348838 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2359794 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2360388 00:45:06.846 Removing: /var/run/dpdk/spdk_pid2361051 00:45:07.106 Removing: /var/run/dpdk/spdk_pid2364119 00:45:07.106 Removing: /var/run/dpdk/spdk_pid2364794 00:45:07.106 Removing: /var/run/dpdk/spdk_pid2365277 00:45:07.106 Removing: /var/run/dpdk/spdk_pid2370266 00:45:07.106 Removing: /var/run/dpdk/spdk_pid2370366 00:45:07.107 Removing: /var/run/dpdk/spdk_pid2372093 00:45:07.107 Removing: /var/run/dpdk/spdk_pid2372587 00:45:07.107 Removing: /var/run/dpdk/spdk_pid2372862 00:45:07.107 Clean 00:45:07.107 08:42:11 -- common/autotest_common.sh@1453 -- # return 0 00:45:07.107 08:42:11 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:07.107 08:42:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:07.107 08:42:11 -- common/autotest_common.sh@10 -- # set +x 00:45:07.107 08:42:11 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:07.107 08:42:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:07.107 08:42:11 -- common/autotest_common.sh@10 -- # set +x 00:45:07.107 08:42:11 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:07.107 08:42:11 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:07.107 08:42:11 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:07.107 08:42:11 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:07.107 08:42:11 
-- spdk/autotest.sh@398 -- # hostname 00:45:07.107 08:42:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:07.368 geninfo: WARNING: invalid characters removed from testname! 00:45:34.044 08:42:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:35.954 08:42:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:37.868 08:42:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:39.778 08:42:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:41.160 08:42:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:43.073 08:42:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:44.458 08:42:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:44.458 08:42:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:44.458 08:42:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:44.458 08:42:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:44.458 08:42:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:44.458 08:42:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:44.458 + [[ -n 
1661671 ]] 00:45:44.458 + sudo kill 1661671 00:45:44.468 [Pipeline] } 00:45:44.483 [Pipeline] // stage 00:45:44.488 [Pipeline] } 00:45:44.502 [Pipeline] // timeout 00:45:44.508 [Pipeline] } 00:45:44.523 [Pipeline] // catchError 00:45:44.528 [Pipeline] } 00:45:44.543 [Pipeline] // wrap 00:45:44.549 [Pipeline] } 00:45:44.562 [Pipeline] // catchError 00:45:44.571 [Pipeline] stage 00:45:44.573 [Pipeline] { (Epilogue) 00:45:44.586 [Pipeline] catchError 00:45:44.587 [Pipeline] { 00:45:44.600 [Pipeline] echo 00:45:44.601 Cleanup processes 00:45:44.607 [Pipeline] sh 00:45:44.929 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:44.929 2386826 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:44.943 [Pipeline] sh 00:45:45.233 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:45.233 ++ grep -v 'sudo pgrep' 00:45:45.233 ++ awk '{print $1}' 00:45:45.233 + sudo kill -9 00:45:45.233 + true 00:45:45.246 [Pipeline] sh 00:45:45.535 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:57.947 [Pipeline] sh 00:45:58.237 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:58.238 Artifacts sizes are good 00:45:58.254 [Pipeline] archiveArtifacts 00:45:58.262 Archiving artifacts 00:45:58.405 [Pipeline] sh 00:45:58.693 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:58.708 [Pipeline] cleanWs 00:45:58.719 [WS-CLEANUP] Deleting project workspace... 00:45:58.719 [WS-CLEANUP] Deferred wipeout is used... 00:45:58.727 [WS-CLEANUP] done 00:45:58.729 [Pipeline] } 00:45:58.747 [Pipeline] // catchError 00:45:58.759 [Pipeline] sh 00:45:59.047 + logger -p user.info -t JENKINS-CI 00:45:59.058 [Pipeline] } 00:45:59.074 [Pipeline] // stage 00:45:59.080 [Pipeline] } 00:45:59.142 [Pipeline] // node 00:45:59.149 [Pipeline] End of Pipeline 00:45:59.181 Finished: SUCCESS